Machine learning (ML)-augmented applications have the potential to be powerful decision-making tools in healthcare. However, healthcare is a complex domain that presents many challenges, such as medical errors, clinician–patient relationships and treatment preferences, which must be addressed to ensure fairness in ML-augmented healthcare applications. To better understand how these challenges influence fairness, 16 experienced engineers and designers with domain knowledge in healthcare technology were interviewed about how they would prioritise fairness in three healthcare scenarios (well-being improvement, chronic illness management and acute illness treatment). Using a template analysis, this work identifies the key considerations in the creation of fair ML for healthcare. These considerations cluster into categories related to technology, healthcare context and user perspectives. To explore these categories, we propose the stakeholder fairness conceptual model. This framework helps designers and developers understand the complex considerations that stem from building, managing and evaluating ML-augmented healthcare applications, and how those considerations shape expectations of fairness. This work then discusses how the model may be applied when health technology is provisioned directly to users, without a healthcare provider managing its use or adoption. This article contributes to the understanding of fairness requirements in healthcare, including the effect of healthcare errors, clinician–application collaboration and how the evaluation of healthcare technology becomes part of the fairness design process.
