Show simple item record

dc.contributor.advisor: Byrne, Ruth [en]
dc.contributor.author: Celar, Lenart [en]
dc.date.accessioned: 2023-02-21T15:55:50Z
dc.date.available: 2023-02-21T15:55:50Z
dc.date.issued: 2023 [en]
dc.date.submitted: 2023 [en]
dc.identifier.citation: Celar, Lenart, Explanations and Familiarity in XAI: How users understand predictions and make decisions using an AI support system, Trinity College Dublin. School of Psychology, 2023 [en]
dc.identifier.other: Y [en]
dc.description: APPROVED [en]
dc.description.abstract: We compared the effects of counterfactual and causal explanations for an Artificial Intelligence (AI) system's decisions in a familiar domain (alcohol and driving) and an unfamiliar one (chemical safety) in four experiments (n=731). Participants were shown information given to an AI system, the decisions it made, and an explanation for each decision; they then attempted to predict the AI's decisions (Experiments 1 and 2), or to make their own decisions (Experiments 3 and 4). The decisions the AI system made were correct (Experiments 1 and 3) or incorrect (Experiments 2 and 4). The results showed a dissociation between participants' subjective judgments that counterfactual explanations were more helpful than causal ones, and their objective accuracy in predicting the AI systems' decisions equally given counterfactual or causal explanations, extending previous research to show this dissociation occurred not only for a familiar domain but also for an unfamiliar one, and only for an AI system that made correct decisions, not one that made incorrect decisions (Experiments 1 and 2). Importantly, the results showed the dissociation was eliminated when participants made their own decisions rather than predicted the AI systems' decisions: they tended to judge counterfactual explanations more helpful than causal ones, and also made more accurate decisions given counterfactual explanations rather than causal ones; they did so for familiar and unfamiliar domains, and only for an AI system that made correct decisions, not one that made incorrect decisions (Experiments 3 and 4). Participants judged explanations more helpful, and their judgments were more accurate, in the familiar domain than the unfamiliar one, only when the AI's decisions were correct (Experiments 1 and 3). The implications for how people understand counterfactual explanations, and for their use in eXplainable AI (XAI), are discussed. [en]
dc.publisher: Trinity College Dublin. School of Psychology. Discipline of Psychology [en]
dc.rights: Y [en]
dc.subject: Explainable AI [en]
dc.subject: XAI [en]
dc.subject: Human reasoning [en]
dc.subject: Decision making [en]
dc.subject: Counterfactual [en]
dc.subject: Causal [en]
dc.subject: Explanations [en]
dc.subject: Familiarity [en]
dc.subject: Artificial Intelligence [en]
dc.title: Explanations and Familiarity in XAI: How users understand predictions and make decisions using an AI support system [en]
dc.type: Thesis [en]
dc.type.supercollection: thesis_dissertations [en]
dc.type.supercollection: refereed_publications [en]
dc.type.qualificationlevel: Masters (Research) [en]
dc.identifier.peoplefinderurl: https://tcdlocalportal.tcd.ie/pls/EnterApex/f?p=800:71:0::::P71_USERNAME:CELARL [en]
dc.identifier.rssinternalid: 250920 [en]
dc.rights.ecaccessrights: openAccess
dc.contributor.sponsor: Google [en]
dc.contributor.sponsor: Other [en]
dc.identifier.uri: http://hdl.handle.net/2262/102183

