Is Explainable Medical AI a Medical Reversal Waiting to Happen? Exploring the Impact of AI Explanations on Clinical Decision Quality.
Record type: Bibliographic - electronic resource : Monograph/item
Title/Author: Is Explainable Medical AI a Medical Reversal Waiting to Happen? Exploring the Impact of AI Explanations on Clinical Decision Quality.
Author: Clement, Jeffrey.
Description: 1 online resource (275 pages)
Notes: Source: Dissertations Abstracts International, Volume: 85-02, Section: B.
Contained by: Dissertations Abstracts International, 85-02B.
Subject: Information science.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30529442 (click for full text, PQDT)
ISBN: 9798379958619
Thesis (Ph.D.)--University of Minnesota, 2023.
Includes bibliographical references.
Medical AI systems generate personalized recommendations to improve patient care, but to have an impact, a system's recommendation must differ from what the clinician would do on their own. That impact may be beneficial (in the case of a high-quality recommendation), in which case clinicians should follow the system; on the other hand, AI will occasionally be wrong, and clinicians should then override it. Resolving conflict between their own judgment and the system's is crucial to optimal AI-augmented clinical decision-making. To help resolve such conflict, there have been recent calls to design AI systems that explain the reasoning behind their recommendations, but it is unclear how such explanations affect the way clinicians incorporate AI recommendations into care decisions. This dissertation explores the issue against the history of medical reversals: technologies and treatments that were initially thought beneficial but ultimately proved ineffective or even harmful to patients. Ideally, AI explanations are helpful; however, practicing evidence-based medicine requires validating this normative claim to ensure that explanations are not ineffective or, worse, actively harmful.

We employ mixed methods, combining semi-structured interviews with three computer-based experiments, to examine factors posited to support proper system use. To ground the findings, Study 1 interviews with clinicians highlighted that disclosing the confidence level of, and explanations for, AI recommendations might create new conflicts between the system and the clinician. To evaluate this within the clinical context, we conducted a trio of experiments in which clinical experts faced drug-dosing decision tasks. In Study 2, participants received AI recommendations for drug dosing and were shown (or not) the confidence level and an explanation.

In Study 3, participants were shown explanations (or not) and received two patient cases, each with either a high- or low-quality AI recommendation. Contrary to theoretical predictions, providing explanations did not uniformly increase the influence of the AI or improve clinical decision quality. Instead, explanations increased the influence of low-quality AI recommendations and decreased the influence of high-quality ones. In Study 4, participants were shown one of four types of explanations along with either a high- or low-quality recommendation. Again, explanations did not optimally improve the influence of the AI or improve decision quality. Two findings help explain why. First, the initial disagreement between a user's a priori judgment and the AI recommendation, together with the quality of the recommendation, was far more influential than any of our explanation formats. Second, the qualitative and mediation analyses indicate that clinicians do carefully consider explanations, but explanations' effects on overall perceptions of explanation detail, helpfulness, or decision conflict do not fully account for the observed effects. We see indications that individual information cues in an explanation are compared against the user's own knowledge and experience, and that discrepancies between the user's opinion about these cues and the conclusions the system purports to draw from them can influence use of the recommendation. Further work is needed to understand exactly how the information in explanations shapes use of AI advice.
Electronic reproduction. Ann Arbor, Mich.: ProQuest, 2023. Mode of access: World Wide Web.
ISBN: 9798379958619
Subjects--Topical Terms: Information science.
Subjects--Index Terms: Medical AI systems
Index Terms--Genre/Form: Electronic books.
LDR 04931nmm a2200397K 4500
001 2363976
005 20231127094749.5
006 m o d
007 cr mn ---uuuuu
008 241011s2023 xx obm 000 0 eng d
020 $a 9798379958619
035 $a (MiAaPQ)AAI30529442
035 $a AAI30529442
040 $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1 $a Clement, Jeffrey. $3 3704757
245 10 $a Is Explainable Medical AI a Medical Reversal Waiting to Happen? Exploring the Impact of AI Explanations on Clinical Decision Quality.
264 0 $c 2023
300 $a 1 online resource (275 pages)
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
500 $a Source: Dissertations Abstracts International, Volume: 85-02, Section: B.
500 $a Advisor: Curley, Shawn; Ren, Yuqing.
502 $a Thesis (Ph.D.)--University of Minnesota, 2023.
504 $a Includes bibliographical references.
533 $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2023
538 $a Mode of access: World Wide Web
650 4 $a Information science. $3 554358
650 4 $a Health sciences. $3 3168359
650 4 $a Computer science. $3 523869
653 $a Medical AI systems
653 $a Health care decision
653 $a Medical reversal
653 $a Clinical decision quality
655 7 $a Electronic books. $2 lcsh $3 542853
690 $a 0723
690 $a 0566
690 $a 0984
690 $a 0800
710 2 $a ProQuest Information and Learning Co. $3 783688
710 2 $a University of Minnesota. $b Business Administration. $3 1022265
773 0 $t Dissertations Abstracts International $g 85-02B.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30529442 $z click for full text (PQDT)
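The MARC fields above all follow one shape: a three-character tag, optional indicator digits, then `$`-prefixed subfields. As a rough illustration of that structure, a minimal parser for this text-display form might look like the following sketch (`parse_marc_line` is a hypothetical helper, and this assumes `$` only ever introduces a subfield code; real MARC data, e.g. binary ISO 2709 records, should be handled with a dedicated library such as pymarc):

```python
def parse_marc_line(line: str) -> tuple[str, str, list[tuple[str, str]]]:
    """Split a text-display MARC line such as
    '650 4 $a Information science. $3 554358'
    into (tag, indicators, [(subfield_code, value), ...])."""
    head, *subs = line.split("$")
    bits = head.split()
    tag = bits[0]                    # three-character field tag, e.g. '650'
    indicators = " ".join(bits[1:])  # zero, one, or two indicator digits
    # each chunk starts with its one-character subfield code
    subfields = [(s[0], s[1:].strip()) for s in subs if s.strip()]
    return tag, indicators, subfields
```

Applied to the 650 field above, this returns `("650", "4", [("a", "Information science."), ("3", "554358")])`.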
Holdings:
Barcode: W9486332
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0