Explain and Improve Natural Language Processing Systems with Human Insights.
Record type:
Bibliographic record - electronic resource : Monograph/item
Title/Author:
Explain and Improve Natural Language Processing Systems with Human Insights./
Author:
Zhao, Xinyan.
Description:
1 online resource (105 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 84-04, Section: A.
Contained By:
Dissertations Abstracts International, 84-04A.
Subjects:
Health sciences.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29705092 (click for full text, PQDT)
ISBN:
9798845449573
Thesis (Ph.D.)--University of Michigan, 2022.
Includes bibliographical references
Human insights play an essential role in artificial intelligence (AI) systems: they increase the coherence between the human mind and system decisions by making the systems interpretable or aided by human input. However, the use of human insights in AI systems remains poorly understood, despite the fact that their predictive power has demonstrated high correspondence with system output. This missing bridge between humans and AI is caused mainly by the black-box nature of deep learning-based models, and it presents two challenges in applying these systems to critical domains such as healthcare, finance, or law. First, the low coherence between humans and AI often leads to systems with high performance that nevertheless lack human trust. Interpreting a system decision demands simple, faithful, and consistent explanations that humans can understand; failing to provide such explanations decreases human trust in the system and can lead to serious outcomes. The second challenge concerns large-scale data annotation, which is usually expensive and impractical in these critical domains because it requires specially trained domain expertise.
To advance AI applications in critical domains, in this dissertation I explore methods that increase the coherence between humans and AI while maintaining high system correspondence. I look at AI systems from a "pipeline" (instead of "model-only") perspective and focus on designing "grey-box" pipelines that have not only high system correspondence but also high coherence between humans and the pipeline. I investigate two directions. The first asks whether we can improve human-AI coherence by mimicking human behavior with high-correspondence black-box models. The second asks whether mimicking human insights can further improve system correspondence.
To investigate the first direction, I evaluate how black-box models can mimic human insights as explanations in various formats, including natural language, extractive data snippets, and human heuristics. In Chapter 3, I introduce an approach that generates natural language explanations based on human rationales, improving explanation quality. In Chapter 4, I conduct a study in a clinical scenario where the system predicts the medication use of patients with inflammatory bowel disease (IBD) based on the patients' medical records. I show that sentence-based explanations can be predicted well by heuristics-based weak supervision with minimal annotation, or by annotation-based supervised learning with larger annotation budgets. Lastly, in Chapter 5, I examine whether black-box models can mimic human heuristics well. Using adverse drug event identification as an example, I show that while human heuristics can easily be implemented in interpretable rule-based or white-box systems, they are often over-simplified. To overcome this limitation, I show that the semantics of the heuristics can be effectively augmented by a black-box model with a lightweight human-in-the-loop annotation effort.
To investigate the second direction, I further evaluate whether human insights can improve system correspondence. In Chapter 3, I show that high-quality language explanations yield gains in system correspondence. In Chapter 4, I show that identifying supporting evidence in patient notes brings substantial performance gains for inferring IBD medication history. In Chapter 5, I show that the semantically augmented human heuristics can easily be applied to weak supervision and produce strong correspondence.
This dissertation contributes to the field of explainable AI. More specifically, it offers opportunities for designing AI systems that require explanations to earn human trust in critical domains by collaborating with human insights.
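The heuristics-based weak supervision the abstract describes can be illustrated with a minimal sketch: each human heuristic becomes a labeling function that votes on an unlabeled note, and the votes are aggregated into pseudo-labels for training. The labeling functions, drug name, and notes below are hypothetical stand-ins for illustration only, not the dissertation's actual rules or data:

```python
# Minimal sketch of heuristics-based weak supervision (hypothetical rules/data).
# Each labeling function encodes one human heuristic and votes
# POSITIVE (1), NEGATIVE (0), or ABSTAIN (-1) on a clinical note.

from collections import Counter

ABSTAIN = -1

def lf_mentions_drug(note: str) -> int:
    # Heuristic: a note naming the drug suggests medication use.
    return 1 if "infliximab" in note.lower() else ABSTAIN

def lf_negation(note: str) -> int:
    # Heuristic: explicit negation phrases suggest no medication use.
    negations = ("denies", "discontinued", "no history of")
    return 0 if any(n in note.lower() for n in negations) else ABSTAIN

def lf_dosage_pattern(note: str) -> int:
    # Heuristic: a dosage expression ("mg") implies active use.
    return 1 if "mg" in note.lower() else ABSTAIN

LABELING_FUNCTIONS = [lf_mentions_drug, lf_negation, lf_dosage_pattern]

def weak_label(note: str) -> int:
    """Majority vote over non-abstaining heuristics; ABSTAIN if none fire."""
    votes = [v for v in (lf(note) for lf in LABELING_FUNCTIONS) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return Counter(votes).most_common(1)[0][0]

notes = [
    "Started infliximab 5 mg/kg infusion today.",
    "Patient denies current IBD medication.",
    "Routine follow-up, no changes discussed.",
]
print([weak_label(n) for n in notes])  # → [1, 0, -1]
```

In a full pipeline, the pseudo-labels produced this way would train a downstream classifier; the dissertation's point is that such rules need only minimal annotation but are over-simplified on their own, motivating the black-box semantic augmentation step.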
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2023.
Mode of access: World Wide Web
ISBN: 9798845449573
Subjects--Topical Terms:
Health sciences.
Subjects--Index Terms:
Natural Language
Index Terms--Genre/Form:
Electronic books.
LDR 05213nmm a2200397K 4500
001 2358348
005 20230731112625.5
006 m o d
007 cr mn ---uuuuu
008 241011s2022 xx obm 000 0 eng d
020 $a 9798845449573
035 $a (MiAaPQ)AAI29705092
035 $a (MiAaPQ)umichrackham004601
035 $a AAI29705092
040 $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1 $a Zhao, Xinyan. $3 3698884
245 10 $a Explain and Improve Natural Language Processing Systems with Human Insights.
264 0 $c 2022
300 $a 1 online resource (105 pages)
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
500 $a Source: Dissertations Abstracts International, Volume: 84-04, Section: A.
500 $a Advisor: Vydiswaran, V. G. Vinod.
502 $a Thesis (Ph.D.)--University of Michigan, 2022.
504 $a Includes bibliographical references
520 $a
Human insights play an essential role in artificial intelligence (AI) systems: they increase the coherence between the human mind and system decisions by making the systems interpretable or aided by human input. However, the use of human insights in AI systems remains poorly understood, despite the fact that their predictive power has demonstrated high correspondence with system output. This missing bridge between humans and AI is caused mainly by the black-box nature of deep learning-based models, and it presents two challenges in applying these systems to critical domains such as healthcare, finance, or law. First, the low coherence between humans and AI often leads to systems with high performance that nevertheless lack human trust. Interpreting a system decision demands simple, faithful, and consistent explanations that humans can understand; failing to provide such explanations decreases human trust in the system and can lead to serious outcomes. The second challenge concerns large-scale data annotation, which is usually expensive and impractical in these critical domains because it requires specially trained domain expertise.
To advance AI applications in critical domains, in this dissertation I explore methods that increase the coherence between humans and AI while maintaining high system correspondence. I look at AI systems from a "pipeline" (instead of "model-only") perspective and focus on designing "grey-box" pipelines that have not only high system correspondence but also high coherence between humans and the pipeline. I investigate two directions. The first asks whether we can improve human-AI coherence by mimicking human behavior with high-correspondence black-box models. The second asks whether mimicking human insights can further improve system correspondence.
To investigate the first direction, I evaluate how black-box models can mimic human insights as explanations in various formats, including natural language, extractive data snippets, and human heuristics. In Chapter 3, I introduce an approach that generates natural language explanations based on human rationales, improving explanation quality. In Chapter 4, I conduct a study in a clinical scenario where the system predicts the medication use of patients with inflammatory bowel disease (IBD) based on the patients' medical records. I show that sentence-based explanations can be predicted well by heuristics-based weak supervision with minimal annotation, or by annotation-based supervised learning with larger annotation budgets. Lastly, in Chapter 5, I examine whether black-box models can mimic human heuristics well. Using adverse drug event identification as an example, I show that while human heuristics can easily be implemented in interpretable rule-based or white-box systems, they are often over-simplified. To overcome this limitation, I show that the semantics of the heuristics can be effectively augmented by a black-box model with a lightweight human-in-the-loop annotation effort.
To investigate the second direction, I further evaluate whether human insights can improve system correspondence. In Chapter 3, I show that high-quality language explanations yield gains in system correspondence. In Chapter 4, I show that identifying supporting evidence in patient notes brings substantial performance gains for inferring IBD medication history. In Chapter 5, I show that the semantically augmented human heuristics can easily be applied to weak supervision and produce strong correspondence.
This dissertation contributes to the field of explainable AI. More specifically, it offers opportunities for designing AI systems that require explanations to earn human trust in critical domains by collaborating with human insights.
533 $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2023
538 $a Mode of access: World Wide Web
650 4 $a Health sciences. $3 3168359
650 4 $a Computer science. $3 523869
650 4 $a Information science. $3 554358
653 $a Natural Language
653 $a Explainable AI
653 $a Human-centered AI
655 7 $a Electronic books. $2 lcsh $3 542853
690 $a 0800
690 $a 0984
690 $a 0566
690 $a 0723
710 2 $a ProQuest Information and Learning Co. $3 783688
710 2 $a University of Michigan. $b Information. $3 2100858
773 0 $t Dissertations Abstracts International $g 84-04A.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29705092 $z click for full text (PQDT)
Holdings (1 item)
Barcode: W9480704
Location: Electronic Resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0