Human input boosts citizens’ acceptance of AI and perceptions of fairness, study shows
Increasing human input when AI is used for public services boosts acceptance of the technology, a new study shows.
The research shows citizens are concerned not only about AI fairness but also about potential human biases. They favour the use of AI in cases where human administrative discretion is perceived as too great.
Researchers found that citizens’ knowledge about AI does not alter their acceptance of the technology. More accurate and lower-cost systems increased acceptance, and the cost and accuracy of the technology mattered more to respondents than human involvement.
The study, by Laszlo Horvath from Birkbeck, University of London and Oliver James, Susan Banducci and Ana Beduschi from the University of Exeter, is published in the journal Government Information Quarterly.
Academics carried out an experiment with 2,143 people in the UK. Respondents were asked whether they would prefer more or less AI in systems for processing immigration visas and parking permits.
Researchers found more human involvement tended to increase acceptance of AI. Yet, when substantial human discretion was introduced in parking permit scenarios, respondents preferred more limited human input.
System-level factors such as high accuracy, the presence of an appeals system, increased transparency, reduced cost, non-sharing of data, and the absence of private company involvement all boosted both acceptance and perceived procedural fairness.