More stories

  • Words prove their worth as teaching tools for robots

    Exploring a new way to teach robots, Princeton researchers have found that human-language descriptions of tools can accelerate the learning of a simulated robotic arm lifting and using a variety of tools.
    The results build on evidence that providing richer information during artificial intelligence (AI) training can make autonomous robots more adaptive to new situations, improving their safety and effectiveness.
    Adding descriptions of a tool’s form and function to the training process improved the robot’s ability to manipulate newly encountered tools that were not in the original training set. A team of mechanical engineers and computer scientists presented the new method, Accelerated Learning of Tool Manipulation with LAnguage, or ATLA, at the Conference on Robot Learning on Dec. 14.
    Robotic arms have great potential to help with repetitive or challenging tasks, but training robots to manipulate tools effectively is difficult: Tools have a wide variety of shapes, and a robot’s dexterity and vision are no match for a human’s.
    “Extra information in the form of language can help a robot learn to use the tools more quickly,” said study coauthor Anirudha Majumdar, an assistant professor of mechanical and aerospace engineering at Princeton who leads the Intelligent Robot Motion Lab.
    The team obtained tool descriptions by querying GPT-3, a large language model released by OpenAI in 2020 that uses a form of AI called deep learning to generate text in response to a prompt. After experimenting with various prompts, they settled on using “Describe the [feature] of [tool] in a detailed and scientific response,” where the feature was the shape or purpose of the tool.
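    The prompt template quoted above can be sketched as a small helper. The template wording comes from the article; the example tools and the idea of enumerating (tool, feature) pairs are illustrative, and sending the prompts to a language model is left out.

```python
# Sketch of the prompt-template approach described above. The template text
# is from the study; the tool list here is invented for illustration.
def build_prompt(tool: str, feature: str) -> str:
    """Fill in the template the researchers reported using to query GPT-3."""
    return f"Describe the {feature} of {tool} in a detailed and scientific response."

tools = ["hammer", "spatula"]      # example tools, not the study's actual set
features = ["shape", "purpose"]    # the two features named in the article
prompts = [build_prompt(t, f) for t in tools for f in features]
```

Each resulting prompt would then be sent to the language model, and the generated description folded into the robot's training data.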

  • New and improved multi-band operational receiver for 5G new radio communication

    An ultra-wide-band receiver based on a harmonic-selection technique to improve the operational bandwidth of 5G networks has been developed by Tokyo Tech researchers in a new study. Fifth generation (5G) mobile networks are now in use worldwide, operating at frequencies that extend into the millimeter-wave range. To keep up with the data traffic in these networks, appropriate receivers are necessary. In this regard, the proposed technology could revolutionize next-generation communications.
    As next-generation communication networks are being developed, the technology used to deploy them must also evolve alongside. Fifth generation mobile network New Radio (5G NR) bands are continuously expanding to improve the channel capacity and data rate. To realize cross-standard communication and worldwide application using 5G NR, multi-band compatibility is, therefore, essential.
    Recently, millimeter-wave (mmW) communication has been considered a promising candidate for managing the ever-increasing data traffic between large devices in 5G NR networks. In the past few years, many studies have shown that a phased-array architecture improves the signal quality for 5G NR communication at mmW frequencies. Unfortunately, multiple chips are needed for multi-band operation, which increases the system size and complexity. Moreover, operating in multi-band modes exposes the receivers to changing electromagnetic environments, leading to cross-talk and cluttered signals with unwanted echoes.
    To address these issues, a team of researchers from Tokyo Institute of Technology (Tokyo Tech) in Japan has now developed a novel “harmonic-selection technique” for extending the operational bandwidth of 5G NR communication. The study, led by Professor Kenichi Okada, was published in the IEEE Journal of Solid-State Circuits. “Compared to conventional systems, our proposed network operates at low power consumption. Additionally, the frequency coverage makes it compatible with all existing 5G bands, as well as the 60 GHz earmarked as the next potential licensed band. As such, our receiver could be the key to utilizing the ever-growing 5G bandwidth,” says Prof. Okada.
    To fabricate the proposed dual-channel multi-band phased-array receiver, the team used a 65-nm CMOS process. The chip size was measured to be just 3.2 mm x 1.4 mm, which included the receiver with two channels.
    The team took a three-pronged approach to tackling the problems with 5G NR communication. The first prong was a harmonic-selection technique using a tri-phase local oscillator (LO) to drive the mixer. This technique decreased the required LO frequency coverage while allowing multi-band down-conversion. The second was a dual-mode multi-band low-noise amplifier (LNA). The LNA structure not only improved power efficiency and tolerance of inter-band blockers (reducing interference from other bands) but also struck a good balance between circuit performance and chip area. The third was the receiver itself, which used a Hartley architecture to improve image rejection. The team introduced a single-stage hybrid-type polyphase filter (PPF) for sideband selection and image-rejection calibration.
    The team found that the proposed technique outperformed other state-of-the-art multi-band receivers. The harmonic-selection technique enabled operation across 24.25–71 GHz while achieving inter-band blocker rejection above 36 dB. Additionally, the power consumed by the receiver was low (36 mW, 32 mW, 51 mW, and 75 mW at 28 GHz, 39 GHz, 47.2 GHz, and 60.1 GHz, respectively).
    “By combining a dual-mode multi-band LNA with a polyphase filter, the device realizes rejections to inter-band blockers better than other state-of-the-art filters. This means that for currently used bands, the rejections are better than 50 dB, and over 36 dB for the entire supported 24–71 GHz operation region. With new 5G frequency bands on the horizon, such low-noise broadband receivers will prove to be useful,” concludes an optimistic Prof. Okada.
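    The bandwidth payoff of harmonic selection can be shown with a toy calculation. The LO tuning range and harmonic orders below are hypothetical, chosen only to illustrate the principle that harmonics of one narrow LO range can together cover a much wider RF band; they are not the actual chip's design values.

```python
# Toy illustration of harmonic selection: harmonics of a single LO tuning
# range jointly cover a far wider RF band than the LO itself spans.
# All numbers are hypothetical, not taken from the Tokyo Tech design.
def covered(rf_ghz: float, lo_min: float, lo_max: float, harmonics) -> bool:
    """True if some harmonic k maps the LO range onto this RF frequency."""
    return any(lo_min * k <= rf_ghz <= lo_max * k for k in harmonics)

# A hypothetical LO tunable over 12-24 GHz, used at its 2nd and 3rd
# harmonics, reaches 24-72 GHz: enough to span the 24.25-71 GHz 5G NR bands.
bands_ghz = (24.25, 28, 39, 47.2, 60.1, 71)
assert all(covered(f, 12, 24, (2, 3)) for f in bands_ghz)
```

The design trade-off this sketches is the one the article describes: the mixer selects a harmonic instead of requiring the LO itself to tune across the whole band.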
    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  • Cheerful chatbots don't necessarily improve customer service

    Imagine messaging an artificial intelligence (AI) chatbot about a missing package and getting the response that it would be “delighted” to help. Once the bot creates the new order, it says it is “happy” to resolve the issue. Afterward, you receive a survey about the interaction: would you rate it as positive or negative?
    This scenario isn’t that far from reality, as AI chatbots are already taking over online commerce. By 2025, 95% of companies will have an AI chatbot, according to Finance Digest. AI might not be sentient yet, but it can be programmed to express emotions.
    Humans displaying positive emotions in customer service interactions have long been known to improve customer experience, but researchers at the Georgia Institute of Technology’s Scheller College of Business wanted to see if this also applied to AI. They conducted experimental studies to determine if positive emotional displays improved customer service and found that emotive AI is only appreciated if the customer expects it, and it may not be the best avenue for companies to invest in.
    “It is commonly believed and repeatedly shown that human employees can express positive emotion to improve customers’ service evaluations,” said Han Zhang, the Steven A. Denning Professor in Technology & Management. “Our findings suggest that the likelihood of AI’s expression of positive emotion to benefit or hurt service evaluations depends on the type of relationship that customers expect from the service agent.”
    The researchers presented their findings in the paper, “Bots With Feelings: Should AI Agents Express Positive Emotion in Customer Service?,” in Information Systems Research in December.
    Studying AI Emotion
    The researchers conducted three studies to expand the understanding of emotional AI in customer service transactions. Although they changed the participants and scenario in each study, the AI chatbots imbued with emotion used positive emotional adjectives, such as excited, delighted, happy, or glad. They also used more exclamation points.
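    The emotive manipulation described above can be sketched as follows. The adjective list and the exclamation points come from the article; the reply wording, function name, and use of a seeded random choice are illustrative stand-ins for however the studies actually generated their chatbot scripts.

```python
import random

# Minimal sketch of an "emotive" vs. plain chatbot reply, in the spirit of
# the studies' manipulation. The phrasing here is invented for illustration.
POSITIVE_ADJECTIVES = ["excited", "delighted", "happy", "glad"]

def reply(message: str, emotive: bool, rng: random.Random) -> str:
    base = f"I will help with that: {message}"
    if not emotive:
        return base + "."
    adjective = rng.choice(POSITIVE_ADJECTIVES)   # positive emotional adjective
    return f"I am {adjective} to help with that: {message}!"  # extra "!"

example = reply("locating your package", emotive=True, rng=random.Random(0))
```

The finding reported above is that the emotive variant helps only when customers expect a communal, human-like relationship with the agent.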

  • Characters' actions in movie scripts reflect gender stereotypes

    Researchers have developed a novel machine-learning framework that uses scene descriptions in movie scripts to automatically recognize different characters’ actions. Applying the framework to hundreds of movie scripts showed that these actions tend to reflect widespread gender stereotypes, some of which are found to be consistent across time. Victor Martinez and colleagues at the University of Southern California, U.S., present these findings in the open-access journal PLOS ONE on December 21.
    Movies, TV shows, and other media consistently portray traditional gender stereotypes, some of which may be harmful. To deepen understanding of this issue, some researchers have explored the use of computational frameworks as an efficient and accurate way to analyze large amounts of character dialogue in scripts. However, some harmful stereotypes might be communicated not through what characters say, but through their actions.
    To explore how characters’ actions might reflect stereotypes, Martinez and colleagues used a machine-learning approach to create a computational model that can automatically analyze scene descriptions in movie scripts and identify different characters’ actions. Using this model, the researchers analyzed over 1.2 million scene descriptions from 912 movie scripts produced from 1909 to 2013, identifying fifty thousand actions performed by twenty thousand characters.
    Next, the researchers conducted statistical analyses to examine whether there were differences between the types of actions performed by characters of different genders. These analyses identified a number of differences that reflect known gender stereotypes.
    For instance, they found that female characters tend to display less agency than male characters, and that female characters are more likely to show affection. Male characters are less likely to “sob” or “cry,” and female characters are more likely to be subjected to “gawking” or “watching” by other characters, highlighting an emphasis on female appearance.
    While the researchers’ model is limited in its ability to fully capture the nuanced societal context that relates a script to each scene and to the overall narrative, these findings align with prior research on gender stereotypes in popular media, and could help raise awareness of how media might perpetuate harmful stereotypes and thereby influence people’s real-life beliefs and actions. In the future, the machine-learning framework could be refined to incorporate intersectional notions of gender, age, and race, to deepen understanding of this issue.
    The authors add: “Researchers have proposed using machine-learning methods to identify stereotypes in character dialogues in media, but these methods do not account for harmful stereotypes communicated through character actions. To address this issue, we developed a large-scale machine-learning framework that can identify character actions from movie script descriptions. By collecting 1.2 million scene descriptions from 912 movie scripts, we were able to study systematic gender differences in movie portrayals at a large scale.”
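    The extract-then-tally pipeline described above can be sketched in miniature. The naive “NAME verbs …” parser, the sample scene lines, and the gender labels below are invented stand-ins; the actual framework is a learned model applied to 1.2 million real scene descriptions.

```python
from collections import Counter

# Toy version of the pipeline: pull (character, action) pairs out of scene
# descriptions, then tally actions by character gender. The parser assumes
# a "NAME VERBS ..." line shape purely for illustration.
def extract_action(description: str):
    """Return (character, verb) from a 'NAME VERBS ...' scene line."""
    words = description.split()
    return words[0], words[1].rstrip(".,").lower()

scenes = [
    "MARY sobs quietly in the corner.",
    "JOHN grabs the keys and runs.",
    "MARY watches him leave.",
]
gender = {"MARY": "F", "JOHN": "M"}   # illustrative metadata, not learned

counts = Counter()
for line in scenes:
    name, verb = extract_action(line)
    counts[(gender[name], verb)] += 1
```

Statistical comparison of such per-gender action counts, at scale, is what surfaced the differences in agency and affection the study reports.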
    Story Source:
    Materials provided by PLOS.

  • At the edge of graphene-based electronics

    A pressing quest in the field of nanoelectronics is the search for a material that could replace silicon. Graphene has seemed promising for decades. But its potential faltered along the way, due to damaging processing methods and the lack of a new electronics paradigm to embrace it. With silicon nearly maxed out in its ability to accommodate faster computing, the next big nanoelectronics platform is needed now more than ever.
    Walter de Heer, Regents’ Professor in the School of Physics at the Georgia Institute of Technology, has taken a critical step forward in making the case for a successor to silicon. De Heer and his collaborators developed a new nanoelectronics platform based on graphene — a single sheet of carbon atoms. The technology is compatible with conventional microelectronics manufacturing, a necessity for any viable alternative to silicon. In the course of their research, published in Nature Communications, the team may have also discovered a new quasiparticle. Their discovery could lead to manufacturing smaller, faster, more efficient, and more sustainable computer chips, and has potential implications for quantum and high-performance computing.
    “Graphene’s power lies in its flat, two-dimensional structure that is held together by the strongest chemical bonds known,” de Heer said. “It was clear from the beginning that graphene can be miniaturized to a far greater extent than silicon — enabling much smaller devices, while operating at higher speeds and producing much less heat. This means that, in principle, more devices can be packed on a single chip of graphene than with silicon.”
    In 2001, de Heer proposed an alternative form of electronics based on epitaxial graphene, or epigraphene — a layer of graphene that was found to spontaneously form on top of silicon carbide crystal, a semiconductor used in high power electronics. At the time, researchers found that electric currents flow without resistance along epigraphene’s edges, and that graphene devices could be seamlessly interconnected without metal wires. This combination allows for a form of electronics that relies on the unique light-like properties of graphene electrons.
    “Quantum interference has been observed in carbon nanotubes at low temperatures, and we expect to see similar effects in epigraphene ribbons and networks,” de Heer said. “This important feature of graphene is not possible with silicon.”
    Building the Platform
    To create the new nanoelectronics platform, the researchers created a modified form of epigraphene on a silicon carbide crystal substrate. In collaboration with researchers at the Tianjin International Center for Nanoparticles and Nanosystems at Tianjin University, China, they produced unique silicon carbide chips from electronics-grade silicon carbide crystals. The graphene itself was grown at de Heer’s laboratory at Georgia Tech using patented furnaces.

  • The physical intelligence of ant and robot collectives

    Individual ants are relatively simple creatures, and yet a colony of ants can perform remarkably complex tasks, such as intricate construction, foraging, and defense.
    Recently, Harvard researchers took inspiration from ants to design a team of relatively simple robots that can work collectively to perform complex tasks using only a few basic parameters.
    The research was published in eLife.
    “This project continued along an abiding interest in understanding the collective dynamics of social insects such as termites and bees, especially how these insects can manipulate the environment to create complex functional architectures,” said L Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology, and Physics, and senior author of the paper.
    The research team began by studying how black carpenter ants work together to excavate out of and escape from a soft corral.
    “At first, the ants inside the corral moved around randomly, communicating via their antennae before they started working together to escape the corral,” said S Ganga Prasath, a postdoctoral fellow at the Harvard John A. Paulson School of Engineering and Applied Sciences and one of the lead authors of the paper.
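    The behavior Prasath describes, random wandering followed by recruitment into collective digging, can be caricatured with a toy agent-based model. Every parameter and rule below is invented for illustration; the paper's actual model is more sophisticated.

```python
import random

# Toy collective-excavation model: agents wander at random along a corral
# wall, and an agent that meets a digger joins in, so effort concentrates
# at one exit site. All parameters here are illustrative, not the paper's.
def simulate(n_agents=20, wall_sites=10, steps=200, seed=1):
    rng = random.Random(seed)
    positions = [rng.randrange(wall_sites) for _ in range(n_agents)]
    digging = [False] * n_agents
    digging[0] = True                  # one agent discovers a weak spot
    dug = [0] * wall_sites             # excavation effort per wall site
    for _ in range(steps):
        for i in range(n_agents):
            if digging[i]:
                dug[positions[i]] += 1          # keep excavating in place
            else:
                positions[i] = rng.randrange(wall_sites)  # random wandering
                # recruitment: join any digger found at the same site
                if any(digging[j] and positions[j] == positions[i]
                       for j in range(n_agents)):
                    digging[i] = True
    return dug

dug = simulate()   # effort ends up concentrated rather than spread evenly
```

The qualitative point matches the article: a few simple local rules, wandering, contact, recruitment, are enough to produce coordinated collective excavation.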

  • Should we tax robots?

    What if the U.S. placed a tax on robots? The concept has been publicly discussed by policy analysts, scholars, and Bill Gates (who favors the notion). Because robots can replace jobs, the idea goes, a stiff tax on them would give firms an incentive to retain workers, while also compensating for a drop-off in payroll taxes when robots are used. Thus far, South Korea has reduced incentives for firms to deploy robots; European Union policymakers, on the other hand, considered a robot tax but did not enact it.
    Now a study by MIT economists scrutinizes the existing evidence and suggests the optimal policy in this situation would indeed include a tax on robots, but only a modest one. The same applies to taxes on foreign trade that would also reduce U.S. jobs, the research finds.
    “Our finding suggests that taxes on either robots or imported goods should be pretty small,” says Arnaud Costinot, an MIT economist and co-author of a published paper detailing the findings. “Although robots have an effect on income inequality … they still lead to optimal taxes that are modest.”
    Specifically, the study finds that a tax on robots should range from 1 percent to 3.7 percent of their value, while trade taxes would be from 0.03 percent to 0.11 percent, given current U.S. income taxes.
    “We came into this not knowing what would happen,” says Iván Werning, an MIT economist and the other co-author of the study. “We had all the potential ingredients for this to be a big tax, so that by stopping technology or trade you would have less inequality, but … for now, we find a tax in the one-digit range, and for trade, even smaller taxes.”
    The paper, “Robots, Trade, and Luddism: A Sufficient Statistic Approach to Optimal Technology Regulation,” appears in advance online form in The Review of Economic Studies. Costinot is a professor of economics and associate head of the MIT Department of Economics; Werning is the department’s Robert M. Solow Professor of Economics.
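    To make the 1 percent to 3.7 percent range concrete, here is the arithmetic applied to an example purchase. The robot price is invented; only the tax rates come from the study.

```python
# Applying the study's optimal robot-tax range to an example purchase.
# The $250,000 robot price is hypothetical, not taken from the paper.
robot_value = 250_000          # USD, illustrative
low, high = 0.01, 0.037        # 1% to 3.7% of the robot's value
tax_low = robot_value * low    # 2,500
tax_high = robot_value * high  # 9,250
print(f"Optimal robot tax: ${tax_low:,.0f} to ${tax_high:,.0f}")
```

By contrast, the study's trade-tax range of 0.03 to 0.11 percent would amount to only tens of dollars on a purchase of the same value.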

  • Crystalline materials: Making the unimaginable possible

    The world’s best artists can take a handful of differently colored paints and create a museum-worthy canvas that looks like nothing else. They do so by drawing upon inspiration, knowledge of what’s been done in the past and design rules they learned after years in the studio.
    Chemists work in a similar way when inventing new compounds. Researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, Northwestern University and The University of Chicago have developed a new method for discovering and making new crystalline materials with two or more elements.
    “We expect that our work will prove extremely valuable to the chemistry, materials and condensed matter communities for synthesizing new and currently unpredictable materials with exotic properties,” said Mercouri Kanatzidis, a chemistry professor at Northwestern with a joint appointment at Argonne.
    “Our invention method grew out of research on unconventional superconductors,” said Xiuquan Zhou, a postdoc at Argonne and first author of the paper. “These are solids with two or more elements, at least one of which is not a metal. And they cease to resist the passage of electricity at different temperatures — anywhere from colder than outer space to the temperature in my office.”
    Over the last five decades, scientists have discovered and made many unconventional superconductors with surprising magnetic and electrical properties. Such materials have a wide gamut of possible applications, such as improved power generation, energy transmission and high-speed transportation. They also have the potential for incorporation into future particle accelerators, magnetic resonance imaging systems, quantum computers and energy-efficient microelectronics.
    The team’s invention method starts with a solution made of two components. One is a highly effective solvent that dissolves and reacts with any solids added to the solution. The other is a weaker solvent, included to tune the reaction so that adding different elements produces a new solid. This tuning involves changing the ratio of the two components and the temperature, which here is quite high: 750 to 1,300 degrees Fahrenheit.
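    For readers more used to Celsius, the reaction temperatures above convert via the standard formula C = (F − 32) × 5/9:

```python
# Converting the stated reaction temperatures from Fahrenheit to Celsius.
def f_to_c(f: float) -> float:
    return (f - 32) * 5 / 9

low_c, high_c = f_to_c(750), f_to_c(1300)   # roughly 399 C to 704 C
```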