ObjectivAIze: Measuring Performance and Biases in Augmented Business Decision Systems
Abstract
Business process management organizes flows of information and decisions in large organizations. These systems now integrate algorithmic decision aids leveraging machine learning: each time a stakeholder needs to make a decision, such as approving a purchase, issuing a quote, or hiring a candidate, the software draws on the inputs and outcomes of similar past decisions to provide guidance in the form of a recommendation. If confidence is high, the process may be automated; otherwise, the aid may still help ensure consistency across decisions. Yet we may question how these aids affect task performance. Can we measure an improvement? Can hidden biases influence decision makers negatively? What is the impact of various presentation options? To address these issues, we propose metrics of performance, automation bias, and resistance. We validated these measures with an online study. Our aim is to instrument such systems to secure their benefits. In a first experiment, we study effective collaboration. Faced with a decision, subjects alone have a success rate of 72%; aided by a recommender with a 75% success rate, their success rate reaches 76%. The human-system collaboration thus achieved a greater success rate than either taken alone. However, we noted a complacency/authority bias that degraded the quality of decisions by 5% when the recommender was wrong. This suggests that any lingering algorithmic bias may be amplified by decision aids. In a second experiment, we evaluated the effectiveness of five presentation variants in reducing complacency bias. We found that optional presentation increases subjects' resistance to wrong recommendations. We intend to use these findings to guide the design of human-algorithm collaboration in financial compliance alert filtering.
Domains
Computer Science [cs]