Understanding the Social and Technical Challenges of Fairness in Digital Health
Citation:
Ryan, Seamus, Understanding the Social and Technical Challenges of Fairness in Digital Health, Trinity College Dublin, School of Computer Science & Statistics, Computer Science, 2024
Abstract:
Fairness is broadly defined as the act of treating similar people similarly and in line with social norms. It is not a new topic of research: fairness, as it relates to ethics and justice, has been discussed for more than 2,000 years. These discussions have ranged from the conceptual, focused on broad sociological goals, to the specific and individual, examining how a person should treat others. The need to understand fairness has always been critical to building just and ethical decision-making systems. With the advent of decision-making technologies built on machine learning, it has become an important part of the conversation in the computer science community. For these technologies, fairness stands alongside accountability and transparency in making up what is considered ethical machine learning. The benchmarks, processes, and evaluation techniques that can be used to check whether a decision is fair have been codified by the academic community into what are called fairness definitions. However, lacking a common language and approaching the problem from different perspectives, these definitions are disparate in their focus and mechanisms of analysis. This thesis begins by indexing and categorising 28 definitions of fairness into a common structure and then exploring the complex requirements and assumptions each definition makes. This indexing alone is not enough to understand the nature of fairness, as it is highly domain-, scenario-, and culture-specific. Within digital healthcare, a domain with the potential for significant benefit from machine learning augmentation, fairness is further complicated by the inherently tailored nature of diagnosis and treatment. As such, given the high-stakes nature of medical decisions, understanding the fairness challenges, and the corresponding approaches to dealing with them, is a prerequisite step before machine learning can see broad healthcare adoption.
This thesis sets out to establish the role that designers and developers play in the creation of fair machine-learning-augmented digital healthcare. It argues that the nature of what is perceived as fairness requires those designers and developers to step beyond the immediate fairness evaluation approaches common in the ML literature, and to consider how machine-learning-augmented applications are built, evaluated, and managed, with these goals defined in tandem with users, patient advocates, and healthcare professionals. To do this, the thesis addresses three distinct yet related research objectives. Firstly, it aims to form an understanding of how users perceive and understand fairness in healthcare technology, focused on discussions of anti-discrimination in pandemic technology. This includes the perspective of users who saw some level of discrimination as inevitable and advocated for applying rules consistently. Secondly, it looks at how subject-matter experts in the field of machine learning currently approach fairness challenges, including the blockers to current fairness adoption. Here, we examine fairness as a topic involving a complex network of perspectives and priorities. This analysis also considers the effect that a lack of high-quality data has on fairness, as well as how the context in which a model is used, and the credence it is given, affect its perceived fairness. Finally, the thesis details how designers and engineers in the area of healthcare treatment prioritise different forms of fairness, and analyses how the engineering methods used in the creation of these applications are linked with their fairness objectives. With this study, we detail a conceptual model identifying the approaches to building, managing, and evaluating ML-enabled healthcare applications, and identify the stakeholders involved in these activities.
This work also discusses how the nature of clinician and model collaboration affects how fairness is prioritised, and how the design considerations for building a fair application change as the scale of adoption changes.
Reflecting upon these studies, this thesis addresses the research objectives of understanding how fairness is perceived by users, by those who build and design the applications, and within the domain of digital healthcare support. These objectives contribute to the field of Human-Computer Interaction by expanding our understanding of the role of experts and users in designing fair machine-learning-augmented applications.
Sponsor:
Science Foundation Ireland (SFI)
Description:
APPROVED
Author: Ryan, Seamus
Advisor:
Doherty, Gavin
Publisher:
Trinity College Dublin. School of Computer Science & Statistics. Discipline of Computer Science
Type of material:
Thesis
Availability:
Full text available
Subject:
Fairness, Healthcare, Ethics, HCI