Fairness trade-offs in hiring: what people prefer and what engineers can build

Authors Pavlo Burda, Sieuwert van Otterloo
Published in Proceedings of the 2026 Conference on Human Centred Artificial Intelligence - Education and Practice
Publication date 2026
Research groups Artificial Intelligence
Type Lecture

Summary

Human-centered AI must confront tensions between mutually incompatible fairness definitions when specifying fairness requirements for algorithmic decision-making (ADM) systems. To investigate how people perceive this trade-off and how their perceptions can guide engineering requirements, we express the underlying principles of common fairness metrics as statements that people may or may not agree with. Using an illustrative dataset, we show how favored metrics can conflict in practice, underscoring the need to make trade-offs explicit and to decide how to resolve them. We design and evaluate a survey that can be used to determine the preferences of stakeholders in a hiring scenario by mapping 12 statements to demographic parity, equal opportunity (TPR), predictive equality (FPR), predictive parity (PPV), fairness through unawareness, and individual fairness. Responses (N=51) indicate broad support for excluding sensitive attributes and for error-rate parity criteria (FPR-TPR), with contrasting views on demographic parity under unequal base rates. We contribute a requirements-elicitation approach that defines the 'fairness requirements' of an ADM system by mapping stakeholder preferences to concrete metrics, yielding a pragmatic set of recommended requirements, using our hiring scenario as a guiding example.
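The conflict the abstract describes can be illustrated numerically. The sketch below uses hypothetical confusion-matrix counts (not the paper's dataset) for two groups with unequal base rates of qualified candidates: even when true-positive and false-positive rates are exactly equal across groups, selection rates (demographic parity) and positive predictive values (predictive parity) come apart.

```python
def metrics(tp, fp, fn, tn):
    """Group-wise fairness metrics for a binary hiring decision."""
    n = tp + fp + fn + tn
    return {
        "selection_rate": (tp + fp) / n,  # demographic parity compares this
        "tpr": tp / (tp + fn),            # equal opportunity
        "fpr": fp / (fp + tn),            # predictive equality
        "ppv": tp / (tp + fp),            # predictive parity
    }

# Hypothetical counts: group A has a 50% base rate of qualified
# candidates, group B only 20%, but the classifier has the same
# TPR (0.8) and FPR (0.2) in both groups.
group_a = metrics(tp=40, fp=10, fn=10, tn=40)
group_b = metrics(tp=16, fp=16, fn=4, tn=64)

for name in ("selection_rate", "tpr", "fpr", "ppv"):
    print(f"{name}: A={group_a[name]:.2f}  B={group_b[name]:.2f}")
```

With these numbers TPR and FPR match across groups, yet A's selection rate is 0.50 against B's 0.32, and A's PPV is 0.80 against B's 0.50: satisfying the error-rate criteria forces violations of demographic parity and predictive parity, so stakeholders must choose which to prioritize.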

Language English
ISBN 979-8-4007-2153-3
Keywords fairness, decision-making, software requirements
Digital Object Identifier 10.1145/3777490.3777496
Page range 27-33
