More stories

  • Researchers take a stand on algorithm design for job centers: Landing a job isn't always the right goal

    Imagine that you are a job consultant. You are sitting across from your client, an unemployed individual.
    After you locate them in the system, the following text pops up on the computer screen: ‘increased risk of long-term unemployment’.
    Such assessments are made by an algorithm that, via data on the citizen’s gender, age, residence, education, income, ethnicity, history of illness, etc., spits out an estimate of how long the person — compared to other people from similar backgrounds — is expected to remain in the system and receive benefits.
    But is it reasonable to characterize individual citizens on the basis of what those with similar backgrounds have managed in their job searches? According to a new study from the University of Copenhagen, no.
    “You have to understand that people are human. We get older, become ill and experience tragedies and triumphs. So instead of trying to predict risks for individuals, we ought to look at implementing improved and more transparent courses in the job center arena,” says Naja Holten Møller, an assistant professor at the Department of Computer Science, and one of the researchers behind the study.
    Together with two colleagues from the same department, Professor Thomas Hildebrandt and Professor Irina Shklovski, Møller has explored possible alternatives to using algorithms that predict job readiness for unemployed individuals as well as the ethical aspects that may arise.

    “We studied how to develop algorithms in an ethical and responsible manner, where the goals determined for the algorithm make sense to job consultants as well. Here, it is crucial to find a balance, where the unemployed individual’s current situation is assessed by a job consultant, while at the same time, one learns from similar trajectories using an algorithm,” says Naja Holten Møller.
    Job consultants need to help create the algorithm
    The use of job-search algorithms is not a well-thought-out scenario. Nevertheless, the Danish Agency for Labour Market and Recruitment has already rolled out this type of algorithm to predict the risk of long-term unemployment among the citizenry, despite criticism from several data law experts.
    “Algorithms used in the public sphere must not harm citizens, obviously. By challenging the scenario and the very assumption that the goal of an unemployed person at a job centre is always to land a job, we are better equipped to understand ethical challenges. Unemployment can have many causes. Thus, the study shows that a quick clarification of time frames for the most vulnerable citizens may be a better goal. By doing so, we can avoid the deployment of algorithms that do great harm,” explains Naja Holten Møller.
    The job consultants surveyed in the study expressed concern about how the algorithm’s assessment would affect their own judgment, specifically in relation to vulnerable citizens.

    “A framework must be established in which job consultants can have a real influence on the underlying targets that guide the algorithm. Accomplishing this is difficult and will take time, but is crucial for the outcome. At the same time, it should be kept in mind that algorithms which help make decisions can greatly alter the work of job consultants. Thus, an ethical approach involves considering their advice,” explains Naja Holten Møller.
    We must consider the ethical aspects
    While algorithms can be useful for providing an idea of, for example, how long an individual citizen might expect to be unemployed, this does not mean that it is ethically justifiable to use such predictions in job centers, points out Naja Holten Møller.
    “There is a dream that the algorithm can identify patterns that others are oblivious to. Perhaps it can seem that, for those who have experienced a personal tragedy, a particular path through the system is best. For example, the algorithm could determine that because you’ve been unemployed due to illness or a divorce, your ability to avoid long-term unemployment depends on such and such,” she says, concluding:
    “But what will we do with this information, and can it be deployed in a sensible way to make better decisions? Job consultants are often able to assess for themselves whether a person is likely to be unemployed for an extended period of time. These assessments are shaped by in-person meetings, professionalism and experience — and it is here, within these meetings, that an ethical development of new systems for the public can best be spawned.”

  • Graphene-based memory resistors show promise for brain-based computing

    As progress in traditional computing slows, new forms of computing are coming to the forefront. At Penn State, a team of engineers is attempting to pioneer a type of computing that mimics the efficiency of the brain’s neural networks while exploiting the brain’s analog nature.
    Modern computing is digital, made up of two states, on-off or one and zero. An analog computer, like the brain, has many possible states. It is the difference between flipping a light switch on or off and turning a dimmer switch to varying amounts of lighting.
    Neuromorphic or brain-inspired computing has been studied for more than 40 years, according to Saptarshi Das, the team leader and Penn State assistant professor of engineering science and mechanics. What’s new is that as the limits of digital computing have been reached, the need for high-speed image processing, for instance for self-driving cars, has grown. The rise of big data, which requires types of pattern recognition for which the brain architecture is particularly well suited, is another driver in the pursuit of neuromorphic computing.
    “We have powerful computers, no doubt about that, the problem is you have to store the memory in one place and do the computing somewhere else,” Das said.
    The shuttling of this data from memory to logic and back again takes a lot of energy and slows the speed of computing. In addition, this computer architecture requires a lot of space. If the computation and memory storage could be located in the same space, this bottleneck could be eliminated.
    “We are creating artificial neural networks, which seek to emulate the energy and area efficiencies of the brain,” explained Thomas Shranghamer, a doctoral student in the Das group and first author on a paper recently published in Nature Communications. “The brain is so compact it can fit on top of your shoulders, whereas a modern supercomputer takes up a space the size of two or three tennis courts.”
    Like the reconfigurable synapses that connect neurons in the brain, the artificial neural networks the team is building can be reconfigured by applying a brief electric field to a sheet of graphene, a one-atom-thick layer of carbon. In this work, they demonstrate at least 16 possible memory states, as opposed to the two in most oxide-based memristors, or memory resistors.
    “What we have shown is that we can control a large number of memory states with precision using simple graphene field effect transistors,” Das said.
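    To give a feel for why more memory states matter for neuromorphic hardware, here is a minimal illustrative sketch, not a model of the team's graphene device: it quantizes a few synaptic weights to 16 evenly spaced conductance levels versus the 2 levels of a binary memristor. All numbers are invented for illustration.
```python
import numpy as np

def quantize(weights, levels, w_min=-1.0, w_max=1.0):
    """Snap continuous synaptic weights to the nearest of `levels` evenly spaced states."""
    states = np.linspace(w_min, w_max, levels)
    idx = np.abs(weights[..., None] - states).argmin(axis=-1)
    return states[idx]

rng = np.random.default_rng(0)
w_analog = rng.uniform(-1, 1, size=5)         # "ideal" analog weights (made up)
print("analog  :", np.round(w_analog, 3))
print("16-state:", quantize(w_analog, 16))    # fine-grained, close to analog
print(" 2-state:", quantize(w_analog, 2))     # a binary device loses most detail
```
    With more states available per device, each synaptic weight can be represented at finer resolution, supporting the area and energy efficiencies the team is aiming for.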
    The team thinks that ramping up this technology to a commercial scale is feasible. With many of the largest semiconductor companies actively pursuing neuromorphic computing, Das believes they will find this work of interest.
    The Army Research Office supported this work. The team has filed for a patent on this invention.

    Story Source:
    Materials provided by Penn State. Original written by Walt Mills. Note: Content may be edited for style and length.

  • Physicists circumvent centuries-old theory to cancel magnetic fields

    A team of scientists including two physicists at the University of Sussex has found a way to circumvent a 178-year-old theorem, which means they can effectively cancel magnetic fields at a distance. They are the first to do so in a way that has practical benefits.
    The researchers hope the work will have a wide variety of applications. For example, patients with neurological disorders such as Alzheimer’s or Parkinson’s might in future receive a more accurate diagnosis. With the ability to cancel out ‘noisy’ external magnetic fields, doctors using magnetic field scanners will be able to see more accurately what is happening in the brain.
    The study “Tailoring magnetic fields in inaccessible regions” is published in Physical Review Letters. It is an international collaboration between Dr Mark Bason and Jordi Prat-Camps at the University of Sussex, and Rosa Mach-Batlle and Nuria Del-Valle from the Universitat Autonoma de Barcelona and other institutions.
    “Earnshaw’s Theorem,” dating from 1842, limits the ability to shape magnetic fields. The team calculated an innovative way to circumvent it in order to effectively cancel the unwanted magnetic fields that can confuse readings in experiments.
    In practical terms, they achieved this by creating a device comprising a careful arrangement of electrical wires. The device generates additional fields that counteract the effects of the unwanted magnetic field. Scientists have struggled with this challenge for years, but the team has now found a new strategy to deal with these problematic fields. While a similar effect has been achieved at much higher frequencies, this is the first time it has been achieved at low frequencies and static fields, such as biological frequencies, which will unlock a host of useful applications.
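    As a rough illustration of the underlying idea of superposition, and only a toy calculation rather than the method in the paper, the sketch below adds the field of a deliberately placed compensating wire to the stray field from another wire so that the two cancel at a chosen point. The wire positions and currents are invented.
```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def field_from_wire(point, wire_pos, current):
    """2D field (T) at `point` from an infinite straight wire along z at `wire_pos`."""
    r_vec = np.array(point, float) - np.array(wire_pos, float)
    r = np.linalg.norm(r_vec)
    magnitude = MU0 * current / (2 * np.pi * r)
    direction = np.array([-r_vec[1], r_vec[0]]) / r  # tangential, right-hand rule
    return magnitude * direction

target = (0.0, 0.0)
stray = field_from_wire(target, wire_pos=(0.5, 0.0), current=10.0)
# A wire placed symmetrically on the other side, carrying the same current,
# produces an equal and opposite field at the target point.
compensating = field_from_wire(target, wire_pos=(-0.5, 0.0), current=10.0)
print("stray field at target:", stray)
print("after compensation   :", stray + compensating)
```
    In the actual device, the cancellation must hold over an extended, inaccessible region rather than at a single point, which is where the careful arrangement of wires described above comes in.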
    Other possible future applications for this work include:
    • Quantum technology and quantum computing, in which ‘noise’ from exterior magnetic fields can affect experimental readings
    • Neuroimaging, in which a technique called ‘transcranial magnetic stimulation’ activates different areas of the brain through magnetic fields. Using the techniques in this paper, doctors might be able to more carefully address areas of the brain needing stimulation.
    • Biomedicine, to better control and manipulate nanorobots and magnetic nanoparticles that are moved inside a body by means of external magnetic fields. Potential applications for this development include improved drug delivery and magnetic hyperthermia therapies.
    Dr Rosa Mach-Batlle, the lead author on the paper from the Universitat Autonoma de Barcelona, said: “Starting from the fundamental question of whether it was possible or not to create a magnetic source at a distance, we came up with a strategy for controlling magnetism remotely that we believe could have a significant impact in technologies relying on the magnetic field distribution in inaccessible regions, such as inside of a human body.”
    Dr Mark Bason from the School of Mathematical and Physical Sciences at the University of Sussex said: “We’ve discovered a way to circumvent Earnshaw’s theorem which many people didn’t imagine was possible. As a physicist, that’s pretty exciting. But it’s not just a theoretical exercise as our research might lead to some really important applications: more accurate diagnosis for Motor Neurone Disease patients in future, for example, better understanding of dementia in the brain, or speeding the development of quantum technology.”

    Story Source:
    Materials provided by University of Sussex. Note: Content may be edited for style and length.

  • Forecasting elections with a model of infectious diseases

    Forecasting elections is a high-stakes problem. Politicians and voters alike are often desperate to know the outcome of a close race, but providing them with incomplete or inaccurate predictions can be misleading. And election forecasting is already an innately challenging endeavor — the modeling process is rife with uncertainty, incomplete information, and subjective choices, all of which must be deftly handled. Political pundits and researchers have implemented a number of successful approaches for forecasting election outcomes, with varying degrees of transparency and complexity. However, election forecasts can be difficult to interpret and may leave many questions unanswered after close races unfold.
    These challenges led researchers to wonder if applying a disease model to elections could widen the community involved in political forecasting. In a paper publishing today in SIAM Review, Alexandria Volkening (Northwestern University), Daniel F. Linder (Augusta University), Mason A. Porter (University of California, Los Angeles), and Grzegorz A. Rempala (The Ohio State University) borrowed ideas from epidemiology to develop a new method for forecasting elections. The team hoped to expand the community that engages with polling data and raise research questions from a new perspective; the multidisciplinary nature of their infectious disease model was a virtue in this regard. “Our work is entirely open-source,” Porter said. “Hopefully that will encourage others to further build on our ideas and develop their own methods for forecasting elections.”
    In their new paper, the authors propose a data-driven mathematical model of the evolution of political opinions during U.S. elections. They found their model’s parameters using aggregated polling data, which enabled them to track the percentages of Democratic and Republican voters over time and forecast the vote margins in each state. The authors emphasized simplicity and transparency in their approach and consider these traits to be particular strengths of their model. “Complicated models need to account for uncertainty in many parameters at once,” Rempala said.
    This study predominantly focused on the influence that voters in different states may exert on each other, since accurately accounting for interactions between states is crucial for the production of reliable forecasts. The election outcomes in states with similar demographics are often correlated, and states may also influence each other asymmetrically; for example, the voters in Ohio may more strongly influence the voters in Pennsylvania than the reverse. The strength of a state’s influence can depend on a number of factors, including the amount of time that candidates spend campaigning there and the state’s coverage in the news.
    To develop their forecasting approach, the team repurposed ideas from the compartmental modeling of biological diseases. Mathematicians often utilize compartmental models — which categorize individuals into a few distinct types (i.e., compartments) — to examine the spread of infectious diseases like influenza and COVID-19. A widely-studied compartmental model called the susceptible-infected-susceptible (SIS) model divides a population into two groups: those who are susceptible to becoming sick and those who are currently infected. The SIS model then tracks the fractions of susceptible and infected individuals in a community over time, based on the factors of transmission and recovery. When an infected person interacts with a susceptible person, the susceptible individual may become infected. An infected person also has a certain chance of recovering and becoming susceptible again.
    Because there are two major political parties in the U.S., the authors employed a modified version of an SIS model with two types of infections. “We used techniques from mathematical epidemiology because they gave us a means of framing relationships between states in a familiar, multidisciplinary way,” Volkening said. While elections and disease dynamics are certainly different, the researchers treated Democratic and Republican voting inclinations as two possible kinds of “infections” that can spread between states. Undecided, independent, or minor-party voters all fit under the category of susceptible individuals. “Infection” was interpreted as adopting Democratic or Republican opinions, and “recovery” represented the turnover of committed voters to undecided ones.
    In the model, committed voters can transmit their opinions to undecided voters, but the opposite is not true. The researchers took a broad view of transmission, interpreting opinion persuasion as occurring through both direct communication between voters and more indirect methods like campaigning, news coverage, and debates. Through these interactions, individuals can change other people’s opinions both within and between states.
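    The mechanics described above can be written down in a few lines. The following is a bare-bones sketch of a two-"infection" SIS-style system for a single state, with invented rate constants and initial fractions; the authors' actual model is fitted to polling data and couples states to one another.
```python
# S = undecided, D = Democratic-leaning, R = Republican-leaning (fractions of voters).
# beta_* are persuasion ("transmission") rates; gamma_* are turnover ("recovery")
# rates back to undecided. All values below are invented for illustration.
beta_D, beta_R = 0.30, 0.28
gamma_D, gamma_R = 0.10, 0.10

S, D, R = 0.30, 0.36, 0.34   # initial fractions (made up)
dt, days = 0.1, 300          # simulate roughly the year before Election Day

for _ in range(int(days / dt)):
    persuaded_D = beta_D * D * S   # committed voters persuade undecided ones
    persuaded_R = beta_R * R * S
    lapsed_D = gamma_D * D         # committed voters drift back to undecided
    lapsed_R = gamma_R * R
    S += dt * (lapsed_D + lapsed_R - persuaded_D - persuaded_R)
    D += dt * (persuaded_D - lapsed_D)
    R += dt * (persuaded_R - lapsed_R)

print(f"Forecast shares  D: {D:.3f}  R: {R:.3f}  undecided: {S:.3f}")
```
    Coupling many such state-level systems together, with asymmetric influence terms between them, and fitting the rates to aggregated polling data is what turns this toy system into a forecast.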
    To determine the values of their models’ mathematical parameters, the authors used polling data on senatorial, gubernatorial, and presidential races from HuffPost Pollster for 2012 and 2016 and RealClearPolitics for 2018. They fit the model to the data for each individual race and simulated the evolution of opinions in the year leading up to each election by tracking the fractions of undecided, Democratic, and Republican voters in each state from January until Election Day. The researchers simulated their final forecasts as if they made them on the eve of Election Day, including all of the polling data but omitting the election results.
    Despite its basis in an unconventional field for election forecasting — namely, epidemiology — the resulting model performed surprisingly well. It forecast the 2012 and 2016 U.S. races for governor, Senate, and presidential office with a similar success rate as popular analyst sites FiveThirtyEight and Sabato’s Crystal Ball. For example, the authors’ success rate for predicting party outcomes at the state level in the 2012 and 2016 presidential elections was 94.1 percent, while FiveThirtyEight had a success rate of 95.1 percent and Sabato’s Crystal Ball had a success rate of 93.1 percent. “We were all initially surprised that a disease-transmission model could produce meaningful forecasts of elections,” Volkening said.
    After establishing their model’s capability to forecast outcomes on the eve of Election Day, the authors sought to determine how early the model could create accurate forecasts. Predictions that are made in the weeks and months before Election Day are particularly meaningful, but producing early forecasts is challenging because fewer polling data are available for model training. Using polling data from the 2018 senatorial races, the team found that their model could produce stable forecasts from early August onward with the same success rate as FiveThirtyEight’s final forecasts for those races.
    Despite clear differences between contagion and voting dynamics, this study suggests a valuable approach for describing how political opinions change across states. Volkening is currently applying this model — in collaboration with Northwestern University undergraduate students Samuel Chian, William L. He, and Christopher M. Lee — to forecast the 2020 U.S. presidential, senatorial, and gubernatorial elections. “This project has made me realize that it’s challenging to judge forecasts, especially when some elections are decided by a vote margin of less than one percent,” Volkening said. “The fact that our model does well is exciting, since there are many ways to make it more realistic in the future. We hope that our work encourages folks to think more critically about how they judge forecasts and get involved in election forecasting themselves.”

  • Toward ultrafast computer chips that retain data even when there is no power

    Spintronic devices are attractive alternatives to conventional computer chips, providing digital information storage that is highly energy efficient and also relatively easy to manufacture on a large scale. However, these devices, which rely on magnetic memory, are still hindered by their relatively slow speeds, compared to conventional electronic chips.
    In a paper published in the journal Nature Electronics, an international team of researchers has reported a new technique for magnetization switching — the process used to “write” information into magnetic memory — that is nearly 100 times faster than state-of-the-art spintronic devices. The advance could lead to the development of ultrafast magnetic memory for computer chips that would retain data even when there is no power.
    In the study, the researchers report using extremely short, 6-picosecond electrical pulses to switch the magnetization of a thin film in a magnetic device with great energy efficiency. A picosecond is one-trillionth of a second.
    The research was led by Jon Gorchon, a researcher at the French National Centre for Scientific Research (CNRS) working at the University of Lorraine’s L’Institut Jean Lamour in France, in collaboration with Jeffrey Bokor, professor of electrical engineering and computer sciences at the University of California, Berkeley, and Richard Wilson, assistant professor of mechanical engineering and of materials science and engineering at UC Riverside. The project began at UC Berkeley when Gorchon and Wilson were postdoctoral researchers in Bokor’s lab.
    In conventional computer chips, the 0s and 1s of binary data are stored as the “on” or “off” states of individual silicon transistors. In magnetic memory, this same information can be stored as the opposite polarities of magnetization, which are usually thought of as the “up” or “down” states. This magnetic memory is the basis for magnetic hard drive memory, the technology used to store the vast amounts of data in the cloud.
    A key feature of magnetic memory is that the data is “non-volatile,” which means that information is retained even when there is no electrical power applied.

    “Integrating magnetic memory directly into computer chips has been a long-sought goal,” said Gorchon. “This would allow local data on-chip to be retained when the power is off, and it would enable the information to be accessed far more quickly than pulling it in from a remote disk drive.”
    The potential of magnetic devices for integration with electronics is being explored in the field of spintronics, in which tiny magnetic devices are controlled by conventional electronic circuits, all on the same chip.
    State-of-the-art spintronics is done with the so-called spin-orbit torque device. In such a device, a small area of a magnetic film (a magnetic bit) is deposited on top of a metallic wire. A current flowing through the wire leads to a flow of electrons with a magnetic moment, which is also called the spin. That, in turn, exerts a magnetic torque — called the spin-orbit torque — on the magnetic bit. The spin-orbit torque can then switch the polarity of the magnetic bit.
    The spin-orbit torque devices developed so far have required current pulses of at least a nanosecond, or a billionth of a second, to switch the magnetic bit, while the transistors in state-of-the-art computer chips switch in only 1 to 2 picoseconds. As a result, the speed of the overall circuit is limited by the slow magnetic switching.
    In this study, the researchers launched the 6-picosecond-wide electrical current pulses along a transmission line into a cobalt-based magnetic bit. The magnetization of the cobalt bit was then demonstrated to be reliably switched by the spin-orbit torque mechanism.

    While heating by electric currents is a debilitating problem in most modern devices, the researchers note that, in this experiment, the ultrafast heating aids the magnetization reversal.
    “The magnet reacts differently to heating on long versus short time scales,” said Wilson. “When heating is this fast, only a small amount can change the magnetic properties to help reverse the magnet’s direction.”
    Indeed, preliminary energy usage estimates are incredibly promising; the energy needed in this “ultrafast” spin-orbit torque device is almost two orders of magnitude smaller than in conventional spintronic devices that operate at much longer time scales.
    “The high energy efficiency of this novel, ultrafast magnetic switching process was a big, and very welcome, surprise,” said Bokor. “Such a high-speed, low-energy spintronic device can potentially tackle the performance limitations of current processor level memory systems, and it could also be used for logic applications.”
    The experimental methods used by the researchers also offer a new way of triggering and probing spintronic phenomena at ultrafast time scales, which could help better understand the underlying physics at play in phenomena like spin-orbit torque.

  • Machine learning helps hunt for COVID-19 therapies

    Michigan State University Foundation Professor Guowei Wei wasn’t preparing machine learning techniques for a global health crisis. Still, when one broke out, he and his team were ready to help.
    The group already has one machine learning model at work in the pandemic, predicting the consequences of mutations to SARS-CoV-2. Now, Wei’s team has deployed another to help drug developers zero in on their most promising leads for attacking one of the virus’s most compelling targets. The researchers shared their findings in the peer-reviewed journal Chemical Science.
    Prior to the pandemic, Wei and his team were already developing machine learning computer models — specifically, models that use what’s known as deep learning — to help save drug developers time and money. The researchers “train” their deep learning models with datasets filled with information about proteins that drug developers want to target with therapeutics. The models can then make predictions about unknown quantities of interest to help guide drug design and testing.
    Over the past three years, the Spartans’ models have been among the top performers in a worldwide competition series for computer-aided drug design known as the Drug Design Data Resource, or D3R, Grand Challenge. Then COVID-19 came.
    “We knew this was going to be bad. China shut down an entire city with 10 million people,” said Wei, who is a professor in the Departments of Mathematics as well as Electrical and Computer Engineering. “We had a technique at hand, and we knew this was important.”
    Wei and his team have repurposed their deep learning models to focus on a specific SARS-CoV-2 protein called its main protease. The main protease is a cog in the coronavirus’s protein machinery that’s critical to how the pathogen makes copies of itself. Drugs that disable that cog could thus stop the virus from replicating.

    What makes the main protease an even more attractive target is that it’s distinct from all known human proteases, which isn’t always the case. Drugs that attack the viral protease are thus less likely to disrupt people’s natural biochemistry.
    Another advantage of the SARS-CoV-2 main protease is that it’s nearly identical to that of the coronavirus responsible for the 2003 SARS outbreak. This means that drug developers and Wei’s team weren’t starting completely from scratch. They had information about the structure of the main protease and about chemical compounds, called protease inhibitors, that interfere with the protein’s function.
    Still, gaps remained in understanding where those protease inhibitors latch onto the viral protein and how tightly. That’s where the Spartans’ deep learning models came in.
    Wei’s team used its models to predict those details for over 100 known protease inhibitors. That data also let the team rank those inhibitors and highlight the most promising ones, which can be very valuable information for labs and companies developing new drugs, Wei said.
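    Once a model has produced a predicted binding affinity for each compound, the ranking step itself is straightforward. The sketch below is purely illustrative: the inhibitor names and affinity values are invented, not the study's predictions.
```python
# Hypothetical predicted binding free energies (kcal/mol); more negative = tighter binding.
predicted_affinity = {
    "inhibitor_A": -9.4,
    "inhibitor_B": -7.1,
    "inhibitor_C": -10.2,
    "inhibitor_D": -6.3,
}

# Rank candidates so the strongest predicted binders are examined first.
ranked = sorted(predicted_affinity.items(), key=lambda item: item[1])

for name, affinity in ranked[:3]:   # shortlist the top candidates for follow-up
    print(f"{name}: {affinity} kcal/mol")
```
    The shortlisted compounds would still need experimental validation, as Wei emphasizes below.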
    “In the early days of a drug discovery campaign, you might have 1,000 candidates,” Wei said. Typically, all of those candidates would move to preclinical tests in animals, and then maybe the 10 or so most promising could safely advance to clinical trials in humans, Wei explained.

    By focusing on drugs that are most attracted to the protease’s most vulnerable spots, drug developers can whittle down that list of 1,000 from the start, saving money and months, if not years, Wei said.
    “This is a way to help drug developers prioritize. They don’t have to waste resources to check every single candidate,” he said.
    But Wei also had a reminder. The team’s models do not replace the need for experimental validation, preclinical or clinical trials. Drug developers still need to prove their products are safe before providing them for patients, which can take many years.
    For that reason, Wei said, antibody treatments that resemble what immune systems produce naturally to fight the coronavirus will most likely be the first therapies approved during the pandemic. These antibodies, however, target the virus’s spike protein rather than its main protease. Developing protease inhibitors would thus provide a welcome addition to an arsenal for fighting a deadly and constantly evolving enemy.
    “If developers want to design a new set of drugs, we’ve shown basically what they need to do,” Wei said.

  • Artificial intelligence-based algorithm for the early diagnosis of Alzheimer's

    Alzheimer’s disease (AD) is a neurodegenerative disorder that affects a significant proportion of the older population worldwide. It causes irreparable damage to the brain and severely impairs the quality of life in patients. Unfortunately, AD cannot be cured, but early detection can allow medication to manage symptoms and slow the progression of the disease.
    Functional magnetic resonance imaging (fMRI) is a noninvasive diagnostic technique for brain disorders. It measures minute changes in blood oxygen levels within the brain over time, giving insight into the local activity of neurons. Despite its advantages, fMRI has not been used widely in clinical diagnosis. The reason is twofold. First, the changes in fMRI signals are so small that they are overly susceptible to noise, which can throw off the results. Second, fMRI data are complex to analyze. This is where deep-learning algorithms come into the picture.
    In a recent study published in the Journal of Medical Imaging, scientists from Texas Tech University employed machine-learning algorithms to classify fMRI data. They developed a type of deep-learning algorithm known as a convolutional neural network (CNN) that can differentiate among the fMRI signals of healthy people, people with mild cognitive impairment, and people with AD.
    CNNs can autonomously extract features from input data that are hidden from human observers. They obtain these features through training, for which a large amount of pre-classified data is needed. CNNs are predominantly used for 2D image classification, which means that four-dimensional fMRI data (three spatial dimensions and one temporal) present a challenge: they are incompatible with most existing CNN designs.
    To overcome this problem, the researchers developed a CNN architecture that can appropriately handle fMRI data with minimal pre-processing steps. The first two layers of the network focus on extracting features from the data solely based on temporal changes, without regard for 3D structural properties. The three subsequent layers then extract spatial features at different scales from the previously extracted time features. This yields a set of spatiotemporal characteristics that the final layers use to classify the input fMRI data as coming from either a healthy subject, a subject with early or late mild cognitive impairment, or one with AD.
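    To make that layer ordering concrete, here is a minimal PyTorch sketch of the idea: voxel-wise temporal convolutions first, then multi-scale 3D spatial convolutions, then a four-way classifier. It is an interpretation of the description above, not the authors' published architecture, and all layer sizes and input dimensions are assumptions.
```python
import torch
import torch.nn as nn

class SpatioTemporalCNN(nn.Module):
    """Sketch: temporal features per voxel, then spatial features, then classification."""
    def __init__(self, n_classes=4):
        super().__init__()
        # Stage 1: two temporal layers applied to each voxel's time series,
        # ignoring 3D structure entirely.
        self.temporal = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse time to 16 features per voxel
        )
        # Stage 2: three spatial layers at different scales (strided 3D convolutions).
        self.spatial = nn.Sequential(
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        # Final layer: healthy / early MCI / late MCI / AD.
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):  # x: [batch, time, X, Y, Z]
        b, t, X, Y, Z = x.shape
        series = x.permute(0, 2, 3, 4, 1).reshape(b * X * Y * Z, 1, t)
        feats = self.temporal(series).reshape(b, X, Y, Z, 16).permute(0, 4, 1, 2, 3)
        feats = self.spatial(feats).flatten(1)   # [batch, 128]
        return self.classifier(feats)

model = SpatioTemporalCNN()
logits = model(torch.randn(2, 140, 16, 16, 16))  # dummy fMRI-like input (assumed sizes)
print(logits.shape)                              # torch.Size([2, 4])
```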
    This strategy offers many advantages over previous attempts to combine machine learning with fMRI for AD diagnosis. Harshit Parmar, doctoral student at Texas Tech University and lead author of the study, explains that the most important aspect of their work lies in the qualities of their CNN architecture. The new design is simple yet effective for handling complex fMRI data, which can be fed as input to the CNN without any significant manipulation or modification of the data structure. In turn, this reduces the computational resources needed and allows the algorithm to make predictions faster.
    Can deep learning methods improve the field of AD detection and diagnosis? Parmar thinks so. “Deep learning CNNs could be used to extract functional biomarkers related to AD, which could be helpful in the early detection of AD-related dementia,” he explains.
    The researchers trained and tested their CNN with fMRI data from a public database, and the initial results were promising: the classification accuracy of their algorithm was as high as or higher than that of other methods.
    If these results hold up for larger datasets, their clinical implications could be tremendous. “Alzheimer’s has no cure yet. Although brain damage cannot be reversed, the progression of the disease can be reduced and controlled with medication,” according to the authors. “Our classifier can accurately identify the mild cognitive impairment stages which provide an early warning before progression into AD.”

  • How computer scientists and marketers can create a better CX with AI

    Researchers from Erasmus University, The Ohio State University, York University, and London Business School published a new paper in the Journal of Marketing that examines the tension between AI’s benefits and costs and then offers recommendations to guide managers and scholars investigating these challenges.
    The study, forthcoming in the Journal of Marketing, is titled “Consumers and Artificial Intelligence: An Experiential Perspective” and is authored by Stefano Puntoni, Rebecca Walker Reczek, Markus Giesler, and Simona Botti.
    Not long ago, artificial intelligence (AI) was the stuff of science fiction. Now it is changing how consumers eat, sleep, work, play, and even date. Consumers can interact with AI throughout the day, from Fitbit’s fitness tracker and Alibaba’s Tmall Genie smart speaker to Google Photo’s editing suggestions and Spotify’s music playlists. Given the growing ubiquity of AI in consumers’ lives, marketers operate in organizations with a culture increasingly shaped by computer science. Software developers’ objective of creating technical excellence, however, may not naturally align with marketers’ objective of creating valued consumer experiences. For example, computer scientists often characterize algorithms as neutral tools evaluated on efficiency and accuracy, an approach that may overlook the social and individual complexities of the contexts in which AI is increasingly deployed. Thus, whereas AI can improve consumers’ lives in very concrete and relevant ways, a failure to incorporate behavioral insight into technological developments may undermine consumers’ experiences with AI.
    This article seeks to bridge these two perspectives. On one hand, the researchers acknowledge the benefits that AI can provide to consumers. On the other hand, they build on and integrate sociological and psychological scholarship to examine the costs consumers can experience in their interactions with AI. As Puntoni explains, “A key problem with optimistic celebrations that view AI’s alleged accuracy and efficiency as automatic promoters of democracy and human inclusion is their tendency to efface intersectional complexities.”
    The article begins by presenting a framework that conceptualizes AI as an ecosystem with four capabilities: data capture, classification, delegation, and social. It focuses on the consumer experience of these capabilities, including the tensions felt. Reczek adds, “To articulate a customer-centric view of AI, we move attention away from the technology toward how the AI capabilities are experienced by consumers. Consumer experience relates to the interactions between the consumer and the company during the customer journey and encompasses multiple dimensions: emotional, cognitive, behavioral, sensorial, and social.”
    The researchers then discuss the experience of these tensions at a macro level, by exposing relevant and often explosive narratives in the sociological context, and at the micro level, by illustrating them with real-life examples grounded in relevant psychological literature. Using these insights, the researchers provide marketers with recommendations regarding how to learn about and manage the tensions. Paralleling the joint emphasis on social and individual responses, they outline both the organizational learning in which firms should engage to lead the deployment of consumer AI and concrete steps to design improved consumer AI experiences. The article closes with a research agenda that cuts across the four consumer experiences and ideas for how researchers might contribute new knowledge on this important topic.

    Story Source:
    Materials provided by American Marketing Association. Original written by Matt Weingarden. Note: Content may be edited for style and length.