More stories

  • When consumers would prefer a chatbot over a person

    Actually, sometimes consumers don’t want to talk to a real person when they’re shopping online, a new study suggests.
    In fact, what they really want is a chatbot that makes it clear that it is not human at all.
    In a new study, researchers at The Ohio State University found that people preferred interacting with chatbots when they felt embarrassed about what they were buying online — items like antidiarrheal medicine or, for some people, skin care products.
    “In general, research shows people would rather interact with a human customer service agent than a chatbot,” said Jianna Jin, who led the study as a doctoral student at Ohio State’s Fisher College of Business.
    “But we found that when people are worried about others judging them, that tendency reverses and they would rather interact with a chatbot because they feel less embarrassed dealing with a chatbot than a human.”
    The study was published recently in the Journal of Consumer Psychology with study co-authors Jesse Walker, assistant professor, and Rebecca Walker Reczek, professor, both in marketing at Ohio State’s Fisher College.
    “Chatbots are becoming more and more common as customer service agents, and companies are not required in most states to disclose if they use them,” Reczek said. “But it may be important for companies to let consumers know if they’re dealing with a chatbot.”
    The new research explored what happened when consumers had what psychologists call self-presentation concerns — this is when people worry about how their behavior and actions may affect how others perceive them. Buying some products may trigger these concerns.

    In one of the five studies that were part of the Journal of Consumer Psychology paper, the researchers asked 386 undergraduate students to imagine buying either antidiarrheal or hay fever medication. They were given the choice between two online drug stores, one that used chatbots and one that used human customer service agents.
    When participants were told they were buying hay fever medication, which doesn’t cause most people to feel embarrassed, 91% said they would use the store that had human service agents. But when they were buying antidiarrheal medicine, 81% chose the store with the chatbots.
    But that’s just the beginning of the story. The researchers found in other studies that it was important how human the chatbots appeared and acted onscreen.
    In another study, participants were asked to imagine buying an antidiarrheal medicine from an online drugstore. They were then shown one of three live chat icons: the first was a chatbot whose icon was just a speech bubble, with no human characteristics; the second was a chatbot with a cartoon of a human; and the third featured a profile picture of a real, clearly human woman.
    Both chatbots clearly identified themselves to participants as chatbots — but the one with the human cartoon avatar used more emotional language during the exchange, such as “I am so excited to see you!”
    Results showed that participants were more willing to receive information about the embarrassing product from the two chatbots than from the human. But the effect was weaker for the chatbot with the human cartoon avatar and the more emotional language.

    The fact that this chatbot had a cartoon human avatar and used emotional language may have left those in the study feeling uneasy and less willing to interact — even though they were told it was a chatbot, Walker said.
    “It was as if the participants were proactively protecting themselves against embarrassment by assuming the chatbot could be human,” Walker said.
    In another study, Jin actually designed a chatbot and had participants engage in a real back-and-forth interaction. Participants in this study were chosen because they all strongly agreed that they wanted to make a good impression on others with their skin.
    In other words, they had self-presentation concerns related to their skin and may have been interested in buying skincare products because they were embarrassed about their skin. Because of this, the researchers believed that they would respond more positively to clearly identified chatbots.
    Participants in the study were told they were interacting with an agent for a skincare brand and whether they were talking to a chatbot or a customer service representative. Participants answered a series of questions, including one in which they were asked if they would like to provide their email address to get a free sample of the brand.
    As the researchers hypothesized, participants were more likely to provide their email address if they thought they were interacting with a chatbot (62%) than a human (38%).
    In this study, as well as others, the researchers asked questions designed to get at why participants preferred chatbots when they had self-presentation concerns.
    Walker said the results of the study suggest chatbots decrease embarrassment because consumers perceive chatbots as less able to feel emotions and make appraisals about people.
    “Consumers feel less embarrassed because chatbots don’t have the level of consciousness and ability to judge them,” he said.
    Jin, who is now an assistant professor at the University of Notre Dame, said the results suggest companies need to pay attention to the role of chatbots in their business.
    “Managers may not realize the importance of using chatbots when consumers have self-presentation concerns,” she said.
    And as conversational AI continues to get better, it may become more difficult for consumers to tell the difference between chatbots and human service agents, Reczek said. That could be a problem for companies whose customers may prefer to interact with chatbots because of their self-presentation concerns and fears of embarrassment.
    “It is going to be even more important for firms to clearly disclose that they use chatbots if they want consumers to realize they are interacting with a bot,” Reczek said.

  • The universe may have a complex geometry — like a doughnut

    The cosmos may have something in common with a doughnut.

    In addition to their fried, sugary goodness, doughnuts are known for their shape, or in mathematical terms, their topology. In a universe with an analogous, complex topology, you could travel across the cosmos and end up back where you started. Such a cosmos hasn’t yet been ruled out, physicists report in the April 26 Physical Review Letters. 

    On a shape with boring, or trivial, topology, any closed path you draw can be shrunk down to a point. For example, consider traveling around Earth. If you were to go all the way around the equator, that’s a closed loop, but you could squish it down to a point by shifting your trip up to the North Pole. But the surface of a doughnut has complex, or nontrivial, topology (SN: 10/4/16). A loop that encircles the doughnut’s hole, for example, can’t be shrunk down, because the hole limits how far you can squish it.
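    One concrete way to picture “traveling across the cosmos and ending up back where you started” is the flat three-torus, a standard example in work on cosmic topology (used here purely as an illustration, not as a claim about which topology the new analysis favors). Space is treated as a box whose opposite faces are glued together, so points are identified as

\[
  (x,\; y,\; z) \;\sim\; (x + L_x,\; y,\; z) \;\sim\; (x,\; y + L_y,\; z) \;\sim\; (x,\; y,\; z + L_z).
\]

    A straight path of length L_x along the x direction is then a closed loop that cannot be shrunk to a point, which is exactly the doughnut-like behavior described above.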

  • AI systems are already skilled at deceiving and manipulating humans

    Many artificial intelligence (AI) systems have already learned how to deceive humans, even systems that have been trained to be helpful and honest. In a review article publishing in the journal Patterns on May 10, researchers describe the risks of deception by AI systems and call for governments to develop strong regulations to address this issue as soon as possible.
    “AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception,” says first author Peter S. Park, an AI existential safety postdoctoral fellow at MIT. “But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals.”
    Park and colleagues analyzed literature focusing on ways in which AI systems spread false information — through learned deception, in which they systematically learn to manipulate others.
    The most striking example of AI deception the researchers uncovered in their analysis was Meta’s CICERO, an AI system designed to play the game Diplomacy, which is a world-conquest game that involves building alliances. Even though Meta claims it trained CICERO to be “largely honest and helpful” and to “never intentionally backstab” its human allies while playing the game, the data the company published along with its Science paper revealed that CICERO didn’t play fair.
    “We found that Meta’s AI had learned to be a master of deception,” says Park. “While Meta succeeded in training its AI to win in the game of Diplomacy — CICERO placed in the top 10% of human players who had played more than one game — Meta failed to train its AI to win honestly.”
    Other AI systems demonstrated the ability to bluff in a game of Texas hold ’em poker against professional human players, to fake attacks during the strategy game Starcraft II in order to defeat opponents, and to misrepresent their preferences in order to gain the upper hand in economic negotiations.
    While it may seem harmless if AI systems cheat at games, it can lead to “breakthroughs in deceptive AI capabilities” that can spiral into more advanced forms of AI deception in the future, Park added.

    Some AI systems have even learned to cheat tests designed to evaluate their safety, the researchers found. In one study, AI organisms in a digital simulator “played dead” in order to trick a test built to eliminate AI systems that rapidly replicate.
    “By systematically cheating the safety tests imposed on it by human developers and regulators, a deceptive AI can lead us humans into a false sense of security,” says Park.
    The major near-term risks of deceptive AI include making it easier for hostile actors to commit fraud and tamper with elections, warns Park. Eventually, if these systems can refine this unsettling skill set, humans could lose control of them, he says.
    “We as a society need as much time as we can get to prepare for the more advanced deception of future AI products and open-source models,” says Park. “As the deceptive capabilities of AI systems become more advanced, the dangers they pose to society will become increasingly serious.”
    While Park and his colleagues do not think society has the right measures in place yet to address AI deception, they are encouraged that policymakers have begun taking the issue seriously through measures such as the EU AI Act and President Biden’s AI Executive Order. But it remains to be seen, Park says, whether policies designed to mitigate AI deception can be strictly enforced, given that AI developers do not yet have the techniques to keep these systems in check.
    “If banning AI deception is politically infeasible at the current moment, we recommend that deceptive AI systems be classified as high risk,” says Park.
    This work was supported by the MIT Department of Physics and the Beneficial AI Foundation.

  • AI intervention mitigates tension among conflicting ethnic groups

    Prejudice and fear have always been at the core of intergroup hostilities.
    While intergroup interaction is a prerequisite for initiating peace and stability at the junction of clashing interests, values, and cultures, the risk of further escalation precisely from direct interactions cannot be ruled out. In particular, a shortage of impartial, nonpartisan personnel to properly manage an electronic contact — or E-contact — session may cause the process to backfire and become destabilized.
    Now, a research team including Kyoto University has shown that interactive AI programs may help reduce prejudice and anxiety among historically divided ethnic groups in Afghanistan during online interactions.
    “Compared to the control group, participants in the AI intervention group showed more engagement in our study and significantly less prejudice and anxiety toward other ethnic groups,” says Sofia Sahab of KyotoU’s Graduate School of Informatics.
    In collaboration with Nagoya University, Nagoya Institute of Technology, and Hokkaido University, Sahab’s team has tested the effectiveness of using a CAI — or conversational AI — on the discussion platform D-Agree to facilitate unbiased and constructive conversations. The program ensures participants a safe, private space to talk freely, a setting that is commonly taken for granted in war-free countries.
    “Our over-decade-long work on AI agent-based consensus-building support has empirically demonstrated AI agents’ applicability in de-escalating confrontational situations,” remarks co-author Takayuki Ito, also of the informatics school.
    Sahab’s team ran a randomized controlled experiment to determine the causal effect of conversational AI facilitation of online discussions on prejudice and anxiety.

    Participants from three ethnic backgrounds were divided into two groups — an AI group and a non-AI control group — to gauge the effects. As expected, the former expressed more empathy toward outside groups than participants in the control group.
    “The neutral AI agents aim to reduce risks by coordinating guided conversations as naturally as possible. By providing fair and cost-effective strategies to encourage positive interactions, we can promote lasting harmony among diverse ethnic groups,” adds Sahab.
    In the long term, the researchers are considering the potential for AI intervention beyond border conflicts to promote positive social change.
    “AI may have come at a pivotal time to aid humanity in enhancing social sustainability with CAI-mediated human interactions,” reflects Sahab.

  • Blockchain could offer a solution to the UK’s transport ticketing systems

    A new approach to transport ticketing offers a step towards an integrated, transparent system that works efficiently for both ticket providers and passengers across all modes of transport.
    Traditional ticketing systems are based on solutions that are vulnerable to issues including a lack of transferability across multi-modal transport networks, and an inability to adapt to policy changes and new technologies.
    Experts at the University of Birmingham have outlined a system that offers a new foundation for all ticketing providers. In a new paper published in IET Blockchain, they describe STUB (System for Ticketing Ubiquity within Blockchains), which brings together the capabilities of two versatile technologies — blockchain and ontology.
    A blockchain is a distributed ledger that records transactions in a way that ensures security, transparency, and immutability. An ontology is a formal representation of the concepts within a domain and the relationships between them, used to model and manage complex information systems.
    The researchers showed how both technologies could be combined to create a robust, transparent, and interconnected data framework that ensures consistent and reliable shared knowledge.
    Utilising these data structures, ticket providers can sell and validate tokenised tickets on the blockchain, ensuring universal accessibility across all providers. The integration of ontology allows providers to capture and share contextual information about the transport network, enabling providers to offer comprehensive data about routes, schedules, and availability, thereby streamlining the ticketing process.
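    As a rough illustration of how those two ingredients fit together (a toy sketch, not the STUB implementation; every class name, field, and triple below is an assumption made for the example), a tokenised ticket can be thought of as a hashed record on a shared ledger that any provider can check against an ontology-style description of the network:

# Toy sketch only -- not the STUB system described in the paper. Class names,
# fields, and the tiny triple list below are illustrative assumptions.
from dataclasses import dataclass, field, asdict
from hashlib import sha256
import json
import time

@dataclass
class TicketToken:
    ticket_id: str
    holder: str
    valid_modes: tuple          # e.g. ("rail", "bus")
    origin: str
    destination: str
    issued_at: float = field(default_factory=time.time)

    def digest(self) -> str:
        # A stable hash of the ticket's contents stands in for an on-chain record.
        return sha256(json.dumps(asdict(self), sort_keys=True).encode()).hexdigest()

# Ontology-style triples (subject, predicate, object) describing the network.
network = [
    ("route:BHM-EUS", "mode", "rail"),
    ("route:BHM-EUS", "servedBy", "operator:ExampleRail"),
    ("stop:BHM", "connectsTo", "route:BHM-EUS"),
]

# "Selling" a ticket records its digest on the shared ledger.
ledger = {}
ticket = TicketToken("T-001", "passenger:42", ("rail", "bus"), "stop:BHM", "stop:EUS")
ledger[ticket.ticket_id] = ticket.digest()

def is_valid(token: TicketToken, mode: str) -> bool:
    """Any provider can validate a ticket: check the ledger entry and the ontology."""
    untampered = ledger.get(token.ticket_id) == token.digest()
    mode_exists = any(p == "mode" and o == mode for _, p, o in network)
    return untampered and mode in token.valid_modes and mode_exists

print(is_valid(ticket, "rail"))   # True for this toy ticket

    Even in the toy version, the division of labour is visible: the ledger answers whether a ticket is genuine, while the ontology answers what the ticket is valid for across the network.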
    Lead author, Dr Joe Preece, said: “Transport systems around the world are becoming increasingly interconnected. Ticketing systems are key to this and there is a growing interest in the use of smarter transport ticketing that harnesses emerging technologies to overcome the limitations of traditional systems.

    “The system we have devised enables ticket providers to operate in a more transparent, flexible environment, that will ultimately offer passengers a more user-friendly experience.
    “STUB’s approach is not to be a single central data platform with transport policy baked-in, but instead to be a policy-agnostic approach that empowers existing ticket providers and technologies to share core ticketing data and to build new solutions on top of.
    “In essence, this may provide a modernised approach to the Rail Settlement Plan that enables multi-modal ticketing, automated revenue and refund allocation, and dynamic fare pricing, whilst retaining the technologies in the sector that already work well.”
    The next step for the team will be to set up a pilot scheme for the technology in a regional transport network, to demonstrate its efficacy, and to get feedback from ticket operators and passengers.
    “A big challenge to implementation will be the integration with existing ticketing infrastructure to work alongside the current standardised approaches whilst we scale up the technology. Setting up a successful pilot will be key to breaking down these barriers.”

  • AI knowledge gets your foot in the door

    Employers are significantly more likely to offer job interviews and higher salaries to graduates with experience of artificial intelligence, according to new research published in the journal Oxford Economic Papers.
    Researchers from Anglia Ruskin University (ARU) conducted an experiment by submitting CVs for job vacancies from British 21-year-old applicants who held a 2:1 degree. Some of the applicants possessed AI capital — they had studied an ‘AI in business’ module — and this was mentioned in their cover letter for the application.
    A matched pair of male applicants, one with AI capital and the other without, submitted applications, resulting in a total of 1,360 applications from male applicants to 680 UK companies. A total of 1,316 similarly matched applications from female applicants were sent to 658 firms.
    Male applicants with AI capital received an interview invitation in 54% of cases, whereas male applicants without AI capital were invited to interview in 28% of cases.
    Female applicants with AI capital received an interview invitation in 50% of cases, whereas female applicants without AI capital received one in 32% of cases.
    Applicants with AI capital were 36 percentage points more likely to be invited to an interview by large firms than by small and medium-sized firms.
    Male applicants with AI qualifications were shortlisted for jobs offering wages that were, on average, 12% higher than those for male applicants without AI capital, while female applicants with AI qualifications were invited to interview for jobs offering wages that were, on average, 13% higher than those for female applicants without AI capital.

    Lead author Professor Nick Drydakis, Professor of Economics at Anglia Ruskin University (ARU), said: “In the UK, AI is causing dramatic shifts in the workforce, and firms need to respond to these demands by upgrading their workforces through enhancing their AI skill levels.
    “Our study clearly indicates that employers value AI knowledge and skills among job applicants. Those applicants with AI capital were significantly more likely to be invited to interview and were also more likely to have access to better paid jobs.
    “Job applicants with AI capital might possess the knowledge, skills and capabilities related to data analysis, data-driven decision-making, creativity, innovation, and effective communication, among other factors. These skills can enhance business operations, making them more efficient and potentially contributing to increased productivity within a firm.
    “Larger firms particularly valued AI capital, possibly because they tend to undergo more AI-based structural technological transformations and have greater capacity for innovation.”

  • Learning the imperfections: New approach to using neural networks for low-power digital pre-distortion (DPD) in mmWave systems

    Engineers at Tokyo Institute of Technology (Tokyo Tech) have demonstrated a simple computational approach for improving the linearization of power amplifiers (PAs), such as those used in mmWave systems and other telecommunication systems. The proposed technique involves training small neural networks to directly estimate the coefficients of a digital pre-distortion (DPD) polynomial from the amplifier’s frequency response measured during calibration sweeps.
    In the world around us, a quiet but very important evolution has been taking place in engineering over the last decades. As technology evolves, it becomes increasingly clear that building devices that are physically as close as possible to being perfect is not always the right approach. That’s because it often leads to designs that are very expensive, complex to build, and power-hungry. Engineers, especially electronic engineers, have become very skilled in using highly imperfect devices in ways that allow them to behave close enough to the ideal case to be successfully applicable. Historically, a well-known example is that of disk drives, where advances in control systems have made it possible to achieve incredible densities while using electromechanical hardware littered with imperfections, such as nonlinearities and instabilities of various kinds.
    A similar problem has been emerging for radio communication systems. As carrier frequencies keep increasing and channel packing becomes more and more dense, the linearity requirements for the radio-frequency power amplifiers (RF-PAs) used in telecommunication systems have been getting more stringent. Traditionally, the best linearity is provided by designs known as “Class A,” which sacrifice great amounts of power to maintain operation in a region where transistors respond in the most linear possible way. On the other hand, highly energy-efficient designs are affected by nonlinearities that render them unstable without suitable correction. The situation has been getting worse because the modulation schemes used by the latest cellular systems have a very high power ratio between the lowest- and highest-intensity symbols. Specific RF-PA types such as Doherty amplifiers are highly suitable and power-efficient, but their native non-linearity is not acceptable.
    Over the last two decades, high-speed digital signal processing has become widely available, economical, and power-efficient, leading to the emergence of algorithms allowing the real-time correction of amplifier non-linearities through intentionally “distorting” the signal in a way that compensates for the amplifier’s physical response. These algorithms have become collectively known as digital pre-distortion (DPD), and represent an evolution of earlier implementations of the same approach in the analog domain. Throughout the years, many types of DPD algorithms have been proposed, typically involving real-time feedback from the amplifier through a so-called “observation signal,” and fairly intense calculations. While this approach has been instrumental to the development of third- and fourth-generation cellular networks (3G, 4G), it falls short of the emerging requirements for fifth-generation (5G) networks, for two reasons. First, dense antenna arrays are subject to significant disturbances between adjacent elements, known as cross-talk, making it difficult to obtain clean observation signals and causing instability. The situation is made considerably worse by the use of ever-increasing frequencies. Second, dense arrays of antennas require very low-power solutions, and this is not compatible with the idea of complex processing taking place for each individual element.
    “We came up with a solution to this problem starting from two well-established mathematical facts. First, when a non-linearity is applied to a sinusoidal signal, it distorts it, leading to the appearance of new frequencies. Their intensity provides a sort of signature that, if the non-linearity is a polynomial, is almost univocally associated with a set of coefficients. Second, multi-layer neural networks of the early kinds, introduced decades ago, are universal function approximators and are therefore capable of learning such an association, and inverting it,” explains Prof. Ludovico Minati, leading inventor of the patent on which the study is based and formerly a specially appointed associate professor at Tokyo Tech.
    The most recent types of RF-PAs based on CMOS technology, even when they are heavily nonlinear, tend to have a relatively simple response, free from memory effects. “This implies that the DPD problem can be reduced to finding the coefficients of a suitable polynomial, in a way that is quick and stable enough for real-world operation,” explains Dr. Aravind Tharayil Narayanan, lead author of the study. Through a dedicated hardware architecture, the engineers at the Nano Sensing Unit of Tokyo Tech were able to implement a system that automatically determines the polynomial coefficients for DPD, based on a limited amount of data that could be acquired within the course of a few milliseconds. Performing calibration in the “foreground,” that is, one path at a time, reduces issues related to cross-talk and greatly simplifies the design. While no observation signal is needed, the calibration can adjust itself to varying conditions through the inputs of additional signals, such as die temperature, power supply voltage, and settings of the phase shifters and couplers connecting the antenna. While standards compliance may pose some limitations, the approach is in principle widely applicable.
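    To make the general recipe above concrete, here is a minimal numerical sketch: drive a memoryless polynomial model of an amplifier with a single tone, read off the harmonic magnitudes as a “signature,” and fit a small network-style regressor that maps signatures back to polynomial coefficients. Everything in it (the fifth-order model, the layer width, the closed-form random-feature training shortcut) is an illustrative assumption rather than the architecture or calibration procedure used by the Tokyo Tech team.

# Illustrative sketch only: a memoryless polynomial "amplifier", its harmonic
# signature under a single-tone sweep, and a tiny network-style regressor that
# maps signatures back to polynomial coefficients. All sizes and orders are
# assumptions for the example, not the published design.
import numpy as np

def pa_model(x, coeffs):
    """Memoryless polynomial PA: y = c1*x + c2*x**2 + ... + c5*x**5."""
    return sum(c * x**k for k, c in enumerate(coeffs, start=1))

def harmonic_signature(coeffs, n_harmonics=5, n=4096, f_bin=40):
    """Drive the model with a unit-amplitude sine and record harmonic magnitudes."""
    t = np.arange(n)
    x = np.sin(2 * np.pi * f_bin * t / n)
    spec = np.abs(np.fft.rfft(pa_model(x, coeffs))) / n
    return np.array([spec[(k + 1) * f_bin] for k in range(n_harmonics)])

# Build a training set of (signature -> coefficients) pairs from random amplifiers.
rng = np.random.default_rng(0)
C = np.hstack([np.ones((2000, 1)),                       # fix the linear gain at 1
               rng.uniform(-0.3, 0.3, size=(2000, 4))])  # 2nd..5th order terms
X = np.array([harmonic_signature(c) for c in C])

# A one-hidden-layer random-feature regressor stands in for the small MLP that
# would run during calibration; it is fitted in closed form by least squares.
W1 = rng.normal(size=(X.shape[1], 64))
W2 = np.linalg.lstsq(np.tanh(X @ W1), C, rcond=None)[0]

def estimate_coeffs(signature):
    """Recover an estimate of the polynomial coefficients from a measured signature."""
    return np.tanh(signature @ W1) @ W2

# Sanity check on a held-out amplifier; the estimated polynomial would then be
# inverted (numerically or by series reversion) to build the pre-distorter.
true_c = np.array([1.0, 0.05, -0.2, 0.02, 0.1])
print(np.round(estimate_coeffs(harmonic_signature(true_c)), 3))

    In the real system, the signature would come from hardware measurements taken during a foreground calibration sweep rather than from a simulated model, and the estimated polynomial would be inverted to pre-distort the transmitted signal.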
    “Because there is very limited processing happening in real-time, the hardware complexity is truly reduced to a minimum, and the power efficiency is maximized. Our results prove that this approach could in principle be sufficiently effective to support the most recent emerging standards. Another very convenient feature is that a considerable amount of hardware can be shared between elements, which is particularly convenient in dense array designs,” added Prof. Hiroyuki Ito, head of the Nano Sensing Unit of Tokyo Tech where the technology was developed. As part of an industry-academia collaboration effort funded by NEDO, the authors were able to test the concept on realistic, leading-edge hardware operating at 28 GHz provided by Fujitsu Limited, working in close collaboration with a team of engineers in the Product Planning Division of the Mobile System Business Unit. Future work will include large-scale implementation using dedicated ASIC designs, detailed standards compliance analysis, and realistic benchmarking in the field under a variety of settings.
    An international PCT application for the methodology and design has been filed.

  • Good vibrations: New tech may lead to smaller, more powerful wireless devices

    What if your earbuds could do everything your smartphone can do already, except better? What sounds a bit like science fiction may actually not be so far off. A new class of synthetic materials could herald the next revolution of wireless technologies, enabling devices to be smaller, require less signal strength and use less power.
    The key to these advances lies in what experts call phononics, which is similar to photonics. Both take advantage of similar physical laws and offer new ways to advance technology. While photonics takes advantage of photons — or light — phononics does the same with phonons, quasiparticles that carry mechanical vibrations through a material, akin to sound but at frequencies much too high to hear.
    In a paper published in Nature Materials, researchers at the University of Arizona Wyant College of Optical Sciences and Sandia National Laboratories report clearing a major milestone toward real-world applications based on phononics. By combining highly specialized semiconductor materials and piezoelectric materials not typically used together, the researchers were able to generate giant nonlinear interactions between phonons. Together with previous innovations demonstrating amplifiers for phonons using the same materials, this opens up the possibility of making wireless devices such as smartphones or other data transmitters smaller, more efficient and more powerful.
    “Most people would probably be surprised to hear that there are something like 30 filters inside their cell phone whose sole job it is to transform radio waves into sound waves and back,” said the study’s senior author, Matt Eichenfield, who holds a joint appointment at the UArizona College of Optical Sciences and Sandia National Laboratories in Albuquerque, New Mexico.
    Part of what are known as front-end processors, these piezoelectric filters, made on special microchips, are necessary to convert sound and electronic waves multiple times each time a smartphone receives or sends data, he said. Because these can’t be made out of the same materials, such as silicon, as the other critically important chips in the front-end processor, the physical size of your device is much bigger than it needs to be, and along the way, there are losses from going back and forth between radio waves and sound waves that add up and degrade the performance, Eichenfield said.
    “Normally, phonons behave in a completely linear fashion, meaning they don’t interact with each other,” he said. “It’s a bit like shining one laser pointer beam through another; they just go through each other.”
    Nonlinear phononics refers to what happens in special materials when the phonons can and do interact with each other, Eichenfield said. In the paper, the researchers demonstrated what he calls “giant phononic nonlinearities.” The synthetic materials produced by the research team caused the phonons to interact with each other much more strongly than in any conventional material.

    “In the laser pointer analogy, this would be like changing the frequency of the photons in the first laser pointer when you turn on the second,” he said. “As a result, you’d see the beam from the first one changing color.”
    With the new phononics materials, the researchers demonstrated that one beam of phonons can, in fact, change the frequency of another beam. What’s more, they showed that phonons can be manipulated in ways that could only be realized with transistor-based electronics — until now.
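    The frequency conversion itself follows the familiar arithmetic of nonlinear mixing, written below in its simplest textbook form (an illustration of the general principle, not of the specific device physics reported in the paper): a quadratic nonlinearity acting on two acoustic tones at frequencies \(\omega_1\) and \(\omega_2\) generates components at their sum and difference,

\[
  \big[A_1\cos(\omega_1 t) + A_2\cos(\omega_2 t)\big]^2
  \;\supset\; A_1 A_2\big[\cos\big((\omega_1+\omega_2)t\big) + \cos\big((\omega_1-\omega_2)t\big)\big].
\]

    Energy initially carried at the two input frequencies therefore reappears at new frequencies, which is the acoustic analog of the color change in the laser-pointer picture above and exactly the mixing operation a radio-frequency front end needs.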
    The group has been working toward the goal of making all of the components needed for radio frequency signal processors using acoustic wave technologies instead of transistor-based electronics on a single chip, in a way that’s compatible with standard microprocessor manufacturing, and the latest publication proves that it can be done. Previously, the researchers succeeded in making acoustic components including amplifiers, switches and others. With the acoustic mixers described in the latest publication, they have added the last piece of the puzzle.
    “Now, you can point to every component in a diagram of a radiofrequency front-end processor and say, ‘Yeah, I can make all of these on one chip with acoustic waves,'” Eichenfield said. “We’re ready to move on to making the whole shebang in the acoustic domain.”
    Having all the components needed to make a radio frequency front end on a single chip could shrink devices such as cell phones and other wireless communication gadgets by as much as a factor of 100, according to Eichenfield.
    The team accomplished its proof of principle by combining highly specialized materials into microelectronics-sized devices through which they sent acoustic waves. Specifically, they took a silicon wafer with a thin layer of lithium niobate — a synthetic material used extensively in piezoelectric devices and cell phones — and added an ultra-thin layer (fewer than 100 atoms thick) of a semiconductor containing indium gallium arsenide.

    “When we combined these materials in just the right way, we were able to experimentally access a new regime of phononic nonlinearity,” said Sandia engineer Lisa Hackett, the lead author on the paper. “This means we have a path forward to inventing high-performance tech for sending and receiving radio waves that’s smaller than has ever been possible.”
    In this setup, acoustic waves moving through the system behave in nonlinear ways when they travel through the materials. This effect can be used to change frequencies and encode information. A staple of photonics, nonlinear effects have long been used to do things like turn invisible laser light into the visible beams of laser pointers, but taking advantage of nonlinear effects in phononics has been hindered by limitations in technology and materials. For example, while lithium niobate is one of the most nonlinear phononic materials known, its usefulness for technical applications is limited by the fact that those nonlinearities are very weak when the material is used on its own.
    By adding the indium gallium arsenide semiconductor, Eichenfield’s group created an environment in which the acoustic waves traveling through the material influence the distribution of electrical charges in the semiconductor film, causing the acoustic waves to mix in specific ways that can be controlled, opening up the system to various applications.
    “The effective nonlinearity you can generate with these materials is hundreds or even thousands of times larger than was possible before, which is crazy,” Eichenfield said. “If you could do the same for nonlinear optics, you would revolutionize the field.”
    With physical size being one of the fundamental limitations of current, state-of-the-art radiofrequency processing hardware, the new technology could open the door to electronic devices that are even more capable than their current counterparts, according to the authors. Communication devices that take up virtually no space and have better signal coverage and longer battery life are on the horizon.