Efficient and Robust Deep Learning for Medical Imaging and Natural Language Processing.

Record type: Bibliographic - electronic resource : Monograph/item
Title: Efficient and Robust Deep Learning for Medical Imaging and Natural Language Processing.
Author: Ozturkler, Batu Mehmet.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2023
Pagination: 113 p.
Note: Source: Dissertations Abstracts International, Volume: 85-11, Section: B.
Contained by: Dissertations Abstracts International, 85-11B.
Subject: Deep learning.
Electronic resource: https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31049653
ISBN: 9798382634906
Dissertation note: Thesis (Ph.D.)--Stanford University, 2023.
Abstract:

Deep learning (DL) has made remarkable progress in fields such as medical imaging and natural language processing (NLP). However, several challenges remain that limit its applicability in real-world settings. First, solving complex tasks requires large neural networks with high expressivity, which poses significant challenges for time and memory efficiency in high-dimensional settings such as accelerated magnetic resonance imaging (MRI) reconstruction. Second, DL algorithms are often sensitive to distribution shifts between training and testing. For example, DL-based MR reconstruction methods fail dramatically under clinically relevant distribution shifts such as noise, scanner-induced drifts, and anatomical changes. Similarly, in NLP, Large Language Models (LLMs) are sensitive to changes in the format of text inputs (prompts), such as the order of words in a prompt. Thus, it is crucial to develop algorithms that are time- and memory-efficient, with improved robustness against distribution shifts.

In this thesis, we address efficiency and robustness issues of existing DL techniques in a series of projects. First, we describe GLEAM, a memory-efficient training strategy for MRI reconstruction that splits an end-to-end neural network into decoupled network modules. GLEAM yields significant improvements in time and memory efficiency while improving reconstruction performance in high-dimensional settings. Then, we describe Noise2Recon, a consistency training method for noise-robust MRI reconstruction that uses both fully-sampled and undersampled scans. We show that Noise2Recon improves robustness over existing DL techniques while using less labeled data, both under low signal-to-noise ratio settings and when generalizing to out-of-distribution acceleration factors.

Next, we discuss methods to improve the robustness of MRI reconstruction using diffusion models. The first method, termed SMRD, performs automatic hyperparameter selection at test time to enhance robustness under clinically relevant distribution shifts. SMRD improves robustness under out-of-distribution measurement noise levels, acceleration factors, and anatomies, achieving a PSNR improvement of up to 6 dB under measurement noise. The second method, termed RED-diff, uses a variational inference approach based on a measurement consistency loss and a score matching regularization. RED-diff achieves 3x faster inference while using the same amount of memory.

Finally, we present ThinkSum, an efficient and robust probabilistic inference method for natural language reasoning. ThinkSum is a two-stage probabilistic inference algorithm that reasons over sets of objects or facts in a structured manner. In the first stage, an LLM is queried in parallel over a set of phrases extracted from the prompt or from an auxiliary model call. In the second stage, the results of these queries are aggregated to make the final prediction. We show that ThinkSum improves performance on difficult NLP tasks and is more robust to prompt design than standard prompting techniques. Additionally, ThinkSum can issue its parallel LLM queries simultaneously to improve efficiency.
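The decoupled-module idea behind GLEAM can be illustrated with a short sketch. This is a minimal illustration of greedy, gradient-decoupled training, not the thesis code; the module sizes, loss, and optimizer settings are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Four small placeholder modules standing in for the stages of an unrolled
# reconstruction network; the sizes are illustrative only.
modules = nn.ModuleList([
    nn.Sequential(nn.Conv2d(1, 16, 3, padding=1),
                  nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))
    for _ in range(4)
])
optims = [torch.optim.Adam(m.parameters(), lr=1e-4) for m in modules]
loss_fn = nn.MSELoss()

def gleam_step(x, target):
    """One greedy training step: each module is updated with its own local loss."""
    for m, opt in zip(modules, optims):
        x = x.detach()            # cut the graph: gradients never cross modules
        out = m(x)
        loss = loss_fn(out, target)
        opt.zero_grad()
        loss.backward()           # backprop spans only this one module
        opt.step()
        x = out
    return x

# Usage: x = undersampled input, target = fully-sampled reference,
# both of shape (batch, 1, H, W).
```

Because each backward pass is local, peak activation memory is that of a single module rather than the whole end-to-end network, which is the source of the memory savings described above.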
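Noise2Recon's consistency training can be sketched as a loss with a supervised term on fully-sampled scans and a consistency term on noise-augmented undersampled scans. The network, noise level, and weighting below are illustrative assumptions, not the thesis implementation.

```python
import torch
import torch.nn.functional as F

def noise2recon_loss(recon_net, y_sup, x_gt, y_unsup, sigma=0.01, lam=1.0):
    """recon_net, sigma, and lam are hypothetical placeholders."""
    # Supervised term: a fully-sampled reference x_gt is available for y_sup.
    sup = F.mse_loss(recon_net(y_sup), x_gt)
    # Consistency term: the reconstruction of a noise-augmented measurement
    # should match the reconstruction of the clean measurement (pseudo-label).
    with torch.no_grad():
        target = recon_net(y_unsup)
    noisy = y_unsup + sigma * torch.randn_like(y_unsup)
    cons = F.mse_loss(recon_net(noisy), target)
    return sup + lam * cons
```

The consistency term requires no ground truth, which is how the method exploits unlabeled undersampled scans and reduces the amount of labeled data needed.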
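SMRD's test-time hyperparameter selection can be sketched as a sweep over candidate regularization weights scored by a self-supervised criterion. SMRD itself uses a SURE-based estimate during diffusion sampling; the sketch below substitutes a simple data-consistency residual, and `reconstruct` and `measure` are hypothetical placeholders.

```python
import torch

def select_weight(y, reconstruct, measure, candidates=(0.1, 0.3, 1.0, 3.0)):
    """Pick the weight whose reconstruction best explains the measurements y.
    reconstruct(y, w) -> image and measure(x) -> simulated measurements are
    placeholders; no ground-truth image is needed at test time."""
    best_w, best_err = None, float("inf")
    for w in candidates:
        x_hat = reconstruct(y, w)                    # e.g. a diffusion sampler
        err = torch.linalg.vector_norm(measure(x_hat) - y).item()
        if err < best_err:
            best_w, best_err = w, err
    return best_w
```

Selecting the weight from the measurements themselves is what lets the method adapt to unseen noise levels, acceleration factors, and anatomies without retraining.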
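ThinkSum's two stages map naturally onto parallel scoring plus aggregation. In this sketch, `llm_logprob` is a hypothetical callable returning the log-probability an LLM assigns to an answer given a prompt, and averaging probabilities over phrases is one plausible instance of the "Sum" stage.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def thinksum(phrases, question, answers, llm_logprob):
    """Stage 1 ('Think'): score every answer once per extracted phrase, with
    the LLM calls issued in parallel. Stage 2 ('Sum'): aggregate the
    per-phrase probabilities and return the best answer."""
    def score(phrase):
        return [llm_logprob(f"{phrase}\n{question}", a) for a in answers]

    with ThreadPoolExecutor() as pool:       # parallel queries over phrases
        per_phrase = list(pool.map(score, phrases))

    # Average the probability of each answer across phrases, then argmax.
    totals = [sum(math.exp(row[i]) for row in per_phrase) / len(per_phrase)
              for i in range(len(answers))]
    return max(zip(totals, answers))[1]
```

Aggregating over many short queries, rather than relying on a single long prompt, is what makes the prediction less sensitive to prompt wording and word order.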
MARC record:
LDR    04278nmm a2200349 4500
001    2398373
005    20240812064623.5
006    m o d
007    cr#unu||||||||
008    251215s2023 ||||||||||||||||| ||eng d
020    $a 9798382634906
035    $a (MiAaPQ)AAI31049653
035    $a (MiAaPQ)STANFORDyy800wj7651
035    $a AAI31049653
040    $a MiAaPQ $c MiAaPQ
100 1  $a Ozturkler, Batu Mehmet. $3 3768281
245 10 $a Efficient and Robust Deep Learning for Medical Imaging and Natural Language Processing.
260  1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2023
300    $a 113 p.
500    $a Source: Dissertations Abstracts International, Volume: 85-11, Section: B.
500    $a Advisor: Pauly, John; Pilanci, Mert; Vasanawala, Shreyas.
502    $a Thesis (Ph.D.)--Stanford University, 2023.
590    $a School code: 0212.
650  4 $a Deep learning. $3 3554982
650  4 $a Natural language processing. $3 1073412
650  4 $a Human performance. $3 3562051
650  4 $a Medical imaging. $3 3172799
650  4 $a Brain research. $3 3561789
650  4 $a Magnetic resonance imaging. $3 554355
650  4 $a Neural networks. $3 677449
650  4 $a Medical research. $2 bicssc $3 1556686
650  4 $a Medicine. $3 641104
650  4 $a Neurosciences. $3 588700
690    $a 0574
690    $a 0800
690    $a 0564
690    $a 0317
710 2  $a Stanford University. $3 754827
773 0  $t Dissertations Abstracts International $g 85-11B.
790    $a 0212
791    $a Ph.D.
792    $a 2023
793    $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31049653
Holdings (1 item):
Barcode: W9506693
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0