More stories

  • When algorithms go bad: How consumers respond

    Researchers from the University of Texas at Austin and Copenhagen Business School have published a new paper in the Journal of Marketing that offers actionable guidance to managers on the deployment of algorithms in marketing contexts.
    The study is titled “When Algorithms Fail: Consumers’ Responses to Brand Harm Crises Caused by Algorithm Errors” and is authored by Raji Srinivasan and Gulen Sarial-Abi.
    Marketers increasingly rely on algorithms to make important decisions. A perfect example is the Facebook News Feed: you do not know why some of your posts show up in some people’s News Feeds while others do not, but Facebook does. Or how about Amazon recommending books and products for you? All of these decisions are driven by algorithms. Algorithms are software, and like any software they are far from perfect: they can fail, and some fail spectacularly. Add in the glare of social media, and a small glitch can quickly turn into a brand harm crisis and a massive PR nightmare. Yet we know little about how consumers respond to brands following such crises.
    First, the research team finds that consumers penalize brands less when an algorithm (vs. a human) causes the error behind a brand harm crisis. This effect is mediated by consumers’ perception that the algorithm has less agency for the error, and therefore less responsibility for the harm caused.
    Second, when the algorithm is more humanized — when it is anthropomorphized (e.g., Alexa, Siri) (vs. not), uses machine learning (vs. not), or is used in a subjective (vs. objective) or interactive (vs. non-interactive) task — consumers’ responses to the brand are more negative following a brand harm crisis caused by an algorithm error. Srinivasan says that “Marketers must be aware that in contexts where the algorithm appears to be more human, it would be wise to maintain heightened vigilance in deploying and monitoring algorithms and to provide resources for managing the aftermath of brand harm crises caused by algorithm errors.”
    This study also generates insights about how to manage the aftermath of brand harm crises caused by algorithm errors. Managers can highlight the role of the algorithm and its lack of agency for the error, which may reduce consumers’ negative responses to the brand. However, highlighting the role of the algorithm will not reduce consumers’ negative responses when the algorithm is anthropomorphized or uses machine learning, or when the error occurs in a subjective or an interactive task, all of which tend to humanize the algorithm.
    Finally, the findings indicate that marketers should not publicize human supervision of algorithms (which may actually be effective in fixing the algorithm) in communications with customers following brand harm crises caused by algorithm errors. However, they should publicize technological supervision of the algorithm when it is used. The reason? Consumers respond less negatively when the algorithm is under technological supervision following a brand harm crisis.
    “Overall, our findings suggest that people are more forgiving of algorithms used in algorithmic marketing when they fail than they are of humans. We see this as a silver lining to the growing usage of algorithms in marketing and their inevitable failures in practice,” says Sarial-Abi.
    Story Source:
    Materials provided by American Marketing Association. Original written by Matt Weingarden. Note: Content may be edited for style and length.

  • Loan applications processed around midday more likely to be rejected

    Bank credit officers are more likely to approve loan applications earlier and later in the day, while ‘decision fatigue’ around midday is associated with defaulting to the safer option of saying no.
    These are the findings of a study by researchers in Cambridge’s Department of Psychology, published today in the journal Royal Society Open Science.
    Decision fatigue is the tiredness caused by having to make difficult decisions over a long period. Previous studies have shown that people suffering from decision fatigue tend to fall back on the ‘default decision’: choosing whatever option is easier or seems safer.
    The researchers looked at the decisions made on 26,501 credit loan applications by 30 credit officers of a major bank over a month. The officers were making decisions on ‘restructuring requests’: where the customer already has a loan but is having difficulties paying it back, so asks the bank to adjust the repayments.
    By studying decisions made at a bank, the researchers could calculate the economic cost of decision fatigue in a specific context — the first time this has been done. They found the bank could have collected around an extra $500,000 in loan repayments if all decisions had been made in the early morning.
    “Credit officers were more willing to make the difficult decision of granting a customer more lenient loan repayment terms in the morning, but by midday they showed decision fatigue and were less likely to agree to a loan restructuring request. After lunchtime they probably felt more refreshed and were able to make better decisions again,” said Professor Simone Schnall in the University of Cambridge’s Department of Psychology, senior author of the report.
    Decisions on loan restructuring requests are cognitively demanding: credit officers have to weigh up the financial strength of the customer against risk factors that reduce the likelihood of repayment. Errors can be costly to the bank. Approving the request results in a loss relative to the original payment plan, but if the restructuring succeeds, the loss is significantly smaller than if the loan is not repaid at all.
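    The trade-off behind each decision can be sketched as a simple expected-loss comparison. The numbers below are purely illustrative assumptions, not figures from the study; they only show why approving a restructuring can be the cheaper option in expectation, even though it locks in a small loss.

    ```python
    # Illustrative expected-loss comparison for a restructuring decision.
    # All figures are hypothetical assumptions, not values reported in the study.

    outstanding = 10_000             # remaining balance on the original loan
    haircut_if_restructured = 0.15   # loss vs. the original plan if the new schedule is honoured
    p_repay_restructured = 0.80      # chance the customer repays under the easier terms
    p_repay_original = 0.40          # chance the customer repays under the original terms
    recovery_if_default = 0.30       # fraction recovered if the customer defaults outright

    # Expected loss if the officer approves the restructuring request
    loss_approve = (p_repay_restructured * haircut_if_restructured * outstanding
                    + (1 - p_repay_restructured) * (1 - recovery_if_default) * outstanding)

    # Expected loss if the officer declines and insists on the original repayment plan
    loss_decline = (1 - p_repay_original) * (1 - recovery_if_default) * outstanding

    print(f"Expected loss if approved: {loss_approve:,.0f}")   # 2,600 under these assumptions
    print(f"Expected loss if declined: {loss_decline:,.0f}")   # 4,200 under these assumptions
    ```

    Under these assumed numbers, the 'safe' default of saying no is roughly 1,600 more expensive in expectation, which is the kind of cost the researchers quantified across the bank's decisions.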
    The study found that customers whose restructuring requests were approved were more likely to repay their loan than if they were instructed to stick to the original repayment terms. Credit officers’ tendency to decline more requests around lunchtime was associated with a financial loss for the bank.
    “Even decisions we might assume are very objective and driven by specific financial considerations are influenced by psychological factors. This is clear evidence that regular breaks during working hours are important for maintaining high levels of performance,” said Tobias Baer, a researcher in the University of Cambridge’s Department of Psychology and first author of the report.
    Modern work patterns have been characterised by extended hours and higher work volume. The results suggest that cutting down on prolonged periods of intensive mental exertion may make workers more productive.
    Story Source:
    Materials provided by University of Cambridge. The original story is licensed under a Creative Commons License. Note: Content may be edited for style and length.

  • New application of AI just removed one of the biggest roadblocks in astrophysics

    Using a bit of machine learning magic, astrophysicists can now simulate vast, complex universes in a thousandth of the time it takes with conventional methods. The new approach will help usher in a new era in high-resolution cosmological simulations, its creators report in a study published online May 4 in Proceedings of the National Academy of Sciences.
    “At the moment, constraints on computation time usually mean we cannot simulate the universe at both high resolution and large volume,” says study lead author Yin Li, an astrophysicist at the Flatiron Institute in New York City. “With our new technique, it’s possible to have both efficiently. In the future, these AI-based methods will become the norm for certain applications.”
    The new method developed by Li and his colleagues feeds a machine learning algorithm with models of a small region of space at both low and high resolutions. The algorithm learns how to upscale the low-res models to match the detail found in the high-res versions. Once trained, the code can take full-scale low-res models and generate ‘super-resolution’ simulations containing up to 512 times as many particles.
    The process is akin to taking a blurry photograph and adding the missing details back in, making it sharp and clear.
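    As a rough illustration of that training setup (and not the authors' actual code), a super-resolution model can be trained on paired low-resolution and high-resolution snapshots of small regions and then applied to full-size low-resolution boxes. The network architecture, grid sizes and training loop below are simplified assumptions; the published method works with particle data and a far more elaborate model.

    ```python
    # Minimal sketch of the low-res -> high-res upscaling idea (hypothetical, simplified).
    import torch
    import torch.nn as nn

    class SuperResolver(nn.Module):
        """Maps a low-resolution density grid to a grid twice as fine along each axis."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False),
                nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(32, 1, kernel_size=3, padding=1),
            )

        def forward(self, x):
            return self.net(x)

    model = SuperResolver()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Toy training loop on paired patches: 16^3 low-res boxes and matching 32^3 high-res boxes.
    for step in range(100):
        low_res = torch.rand(4, 1, 16, 16, 16)    # stand-in for small low-res regions
        high_res = torch.rand(4, 1, 32, 32, 32)   # stand-in for the matching high-res regions
        loss = loss_fn(model(low_res), high_res)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Once trained, the same network can be swept over a full-size low-res box to produce
    # a 'super-resolution' version far faster than re-simulating it at high resolution.
    ```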
    This upscaling brings significant time savings. For a region in the universe roughly 500 million light-years across containing 134 million particles, existing methods would require 560 hours to churn out a high-res simulation using a single processing core. With the new approach, the researchers need only 36 minutes.
    The results were even more dramatic when more particles were added to the simulation. For a universe 1,000 times as large with 134 billion particles, the researchers’ new method took 16 hours on a single graphics processing unit. Existing methods would take so long that they wouldn’t even be worth running without dedicated supercomputing resources, Li says.

  • New graphene-based sensor technology for wearable medical devices

    Researchers at AMBER, the SFI Centre for Advanced Materials and BioEngineering Research, and from Trinity’s School of Physics, have developed next-generation, graphene-based sensing technology using their innovative G-Putty material.
    The team’s printed sensors are 50 times more sensitive than the industry standard and outperform other comparable nano-enabled sensors in an important metric seen as a game-changer in the industry: flexibility.
    Maximising sensitivity and flexibility without reducing performance makes the team’s technology an ideal candidate for the emerging areas of wearable electronics and medical diagnostic devices.
    The team — led by Professor Jonathan Coleman from Trinity’s School of Physics, one of the world’s leading nanoscientists — demonstrated that they can produce a low-cost, printed, graphene nanocomposite strain sensor.
    They developed a method to formulate G-Putty based inks that can be printed as a thin-film onto elastic substrates, including band-aids, and attached easily to the skin.

  • Little to no increase in association between adolescents' mental health problems and digital tech

    With the explosion in digital entertainment options over the past several decades and the more recent restrictions on outdoor and in-person social activities, parents may worry that excessive engagement with digital technology could have long-term effects on their children’s mental health.
    A new study published in the journal Clinical Psychological Science, however, found little evidence for an increased association between adolescents’ technology engagement and mental health problems over the past 30 years. The data did not consistently support the suggestion that the technologies we worry about most (e.g., smartphones) are becoming more harmful.
    The new study, which included 430,000 U.K. and U.S. adolescents, investigated the links between social media use and depression, emotional problems, and conduct problems. It also examined the associations between television viewing and suicidality, depression, emotional problems, and conduct problems. Finally, the study explored the association between digital device use and suicidality.
    Of the eight associations examined in this research, only three showed some change over time. Social media use and television viewing became less strongly associated with depression. In contrast, social media’s association with emotional problems did increase, although only slightly. The study found no consistent changes in technology engagement’s associations with conduct problems or suicidality.
    “If we want to understand the relationship between tech and well-being today, we need to first go back and look at historic data — as far back as when parents were concerned too much TV would give their kids square eyes — in order to bring the contemporary concerns we have about newer technologies into focus,” said Matti Vuorre, a postdoctoral researcher at the Oxford Internet Institute and lead author on the paper.
    The study also highlighted key factors preventing scientists from conclusively determining how technology use relates to mental health.
    “As more data accumulates on adolescents’ use of emerging technologies, our knowledge of them and their effects on mental health will become more precise,” said Andy Przybylski, director of research at Oxford Internet Institute and senior author on the study. “So, it’s too soon to draw firm conclusions about the increasing, or declining, associations between social media and adolescent mental health, and it is certainly way too soon to be making policy or regulation on this basis.
    “We need more transparent and credible collaborations between scientists and technology companies to unlock the answers. The data exists within the tech industry; scientists just need to be able to access it for neutral and independent investigation,” Przybylski said.
    Story Source:
    Materials provided by Association for Psychological Science. Note: Content may be edited for style and length.

  • New synapse-like phototransistor

    Researchers at the U.S. Department of Energy’s National Renewable Energy Laboratory (NREL) have achieved a breakthrough in energy-efficient phototransistors. Such devices could eventually help computers process visual information more like the human brain and be used as sensors in things like self-driving vehicles.
    The structures rely on a new type of semiconductor — metal-halide perovskites — which have proven to be highly efficient at converting sunlight into electrical energy and shown tremendous promise in a range of other technologies.
    “In general, these perovskite semiconductors are a really unique functional system with potential benefits for a number of different technologies,” said Jeffrey Blackburn, a senior scientist at NREL and co-author of a new paper outlining the research. “NREL became interested in this material system for photovoltaics, but they have many properties that could be applied to whole different areas of science.”
    In this case, the researchers combined perovskite nanocrystals with a network of single-walled carbon nanotubes to create a material combination they thought might have interesting properties for photovoltaics or detectors. When they shined a laser at it, they found a surprising electrical response.
    “What normally would happen is that, after absorbing the light, an electrical current would briefly flow for a short period of time,” said Joseph Luther, a senior scientist and co-author. “But in this case, the current continued to flow and did not stop for several minutes even when the light was switched off.”
    Such behavior is referred to as “persistent photoconductivity” and is a form of “optical memory,” where the light energy hitting a device can be stored in “memory” as an electrical current. The phenomenon can also mimic synapses in the brain that are used to store memories. Often, however, persistent photoconductivity requires low temperatures and/or high operating voltages, and the current spike lasts only a small fraction of a second. In this new discovery, the persistent photoconductivity occurs at room temperature, and the current continues to flow for more than an hour after the light is switched off. In addition, only low voltages and low light intensities were needed, highlighting the low energy required to store memory.

  • Algorithms improve how we protect our data

    Daegu Gyeongbuk Institute of Science and Technology (DGIST) scientists in Korea have developed algorithms that more efficiently measure how difficult it would be for an attacker to guess secret keys for cryptographic systems. The approach they used was described in the journal IEEE Transactions on Information Forensics and Security and could reduce the computational complexity needed to validate encryption security.
    “Random numbers are essential for generating cryptographic information,” explains DGIST computer scientist Yongjune Kim, who co-authored the study with Cyril Guyot and Young-Sik Kim. “This randomness is crucial for the security of cryptographic systems.”
    Cryptography is used in cybersecurity for protecting information. Scientists often use a metric, called ‘min-entropy’, to estimate and validate how good a source is at generating the random numbers used to encrypt data. Data with low entropy is easier to decipher, whereas data with high entropy is much more difficult to decode. But it is difficult to accurately estimate the min-entropy for some types of sources, leading to underestimations.
    Kim and his colleagues developed an offline algorithm that estimates min-entropy based on a whole data set, and an online estimator that only needs limited data samples. The accuracy of the online estimator improves as the amount of data samples increases. Also, the online estimator does not need to store entire datasets, so it can be used in applications with stringent memory, storage and hardware constraints, like Internet-of-things devices.
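    For intuition, the min-entropy of a source whose symbols appear with probabilities p_i is -log2(max_i p_i): it is governed entirely by the single most likely symbol. The sketch below shows a straightforward plug-in version of an 'offline' estimate over a full dataset and an 'online' running estimate that keeps only per-symbol counts; it illustrates the concept and is not the DGIST algorithms described in the paper.

    ```python
    # Illustrative plug-in min-entropy estimators (not the DGIST algorithms).
    # Min-entropy: H_min = -log2(max_i p_i), where p_i are the symbol probabilities.
    import math
    from collections import Counter

    def min_entropy_offline(samples):
        """Plug-in estimate computed from a complete dataset of symbols."""
        counts = Counter(samples)
        p_max = max(counts.values()) / len(samples)
        return -math.log2(p_max)

    class OnlineMinEntropy:
        """Running estimate that never stores the raw data, only per-symbol counts."""
        def __init__(self):
            self.counts = Counter()
            self.n = 0

        def update(self, symbol):
            self.counts[symbol] += 1
            self.n += 1

        def estimate(self):
            p_max = max(self.counts.values()) / self.n
            return -math.log2(p_max)

    stream = [0, 1, 1, 0, 2, 1, 3, 1, 0, 2] * 100   # toy stream of symbols from a source

    print(min_entropy_offline(stream))               # offline estimate over the whole stream

    online = OnlineMinEntropy()
    for s in stream:
        online.update(s)
    print(online.estimate())                         # matches the offline value as samples accumulate
    ```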
    “Our evaluations showed that our algorithms can estimate min-entropy 500 times faster than the current standard algorithm while maintaining estimation accuracy,” says Kim.
    Kim and his colleagues are working on improving the accuracy of this and other algorithms for estimating entropy in cryptography. They are also investigating how to improve privacy in machine learning applications.
    Story Source:
    Materials provided by DGIST (Daegu Gyeongbuk Institute of Science and Technology). Note: Content may be edited for style and length.

  • Complex shapes of photons to boost future quantum technologies

    As the digital revolution becomes mainstream, quantum computing and quantum communication are attracting growing attention. The enhanced measurement technologies enabled by quantum phenomena, and the possibility of scientific progress using new methods, are of particular interest to researchers around the world.
    Recently two researchers at Tampere University, Assistant Professor Robert Fickler and Doctoral Researcher Markus Hiekkamäki, demonstrated that two-photon interference can be controlled in a near-perfect way using the spatial shape of the photon. Their findings were recently published in the journal Physical Review Letters.
    “Our report shows how a complex light-shaping method can be used to make two quanta of light interfere with each other in a novel and easily tuneable way,” explains Markus Hiekkamäki.
    Single photons (units of light) can have highly complex shapes that are known to be beneficial for quantum technologies such as quantum cryptography, super-sensitive measurements, or quantum-enhanced computational tasks. To make use of these so-called structured photons, it is crucial to make them interfere with other photons.
    “One crucial task in essentially all quantum technological applications is improving the ability to manipulate quantum states in a more complex and reliable way. In photonic quantum technologies, this task involves changing the properties of a single photon as well as interfering multiple photons with each other,” says Robert Fickler, who leads the Experimental Quantum Optics group at the university.
    Linear optics bring promising solutions to quantum communications
    The demonstrated development is especially interesting from the point of view of high-dimensional quantum information science, where more than a single bit of quantum information is used per carrier. These more complex quantum states not only allow the encoding of more information onto a single photon but are also known to be more noise-resistant in various settings.
    The method presented by the research duo holds promise for building new types of linear optical networks. This paves the way for novel schemes of photonic quantum-enhanced computing.
    “Our experimental demonstration of bunching two photons into multiple complex spatial shapes is a crucial next step for applying structured photons to various quantum metrological and informational tasks,” continues Markus Hiekkamäki.
    The researchers now aim at utilizing the method for developing new quantum-enhanced sensing techniques, while exploring more complex spatial structures of photons and developing new approaches for computational systems using quantum states.
    “We hope that these results inspire more research into the fundamental limits of photon shaping. Our findings might also trigger the development of new quantum technologies, e.g. improved noise-tolerant quantum communication or innovative quantum computation schemes, that benefit from such high-dimensional photonic quantum states,” adds Robert Fickler.
    Story Source:
    Materials provided by Tampere University. Note: Content may be edited for style and length.