Foundation Models for Robust Machine Learning /
Record type:
Bibliographic record - electronic resource : Monograph/item
Title / Author:
Foundation Models for Robust Machine Learning / Ananya Kumar.
Author:
Kumar, Ananya,
Description:
1 electronic resource (244 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 85-04, Section: B.
Contained by:
Dissertations Abstracts International, 85-04B.
Subjects:
Adaptation. - Connectivity.
Electronic resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30615167
ISBN:
9798380482653
Abstract:
Machine learning systems are not robust to distribution shifts: they suffer large drops in accuracy when deployed in environments different from those they were trained on. For example, when satellite remote sensing models are deployed in new countries, tumor detection models are deployed in new hospitals, or wildlife conservation models are deployed in new forests, they face large drops in accuracy. In this thesis, we show that the foundation model paradigm is a principled solution that leads to state-of-the-art robustness. The foundation model paradigm consists of three steps: pretraining a model on diverse unlabeled data (e.g., satellite images from around the world) to learn general-purpose representations, adapting these models to the downstream tasks we care about, and then deploying these models in the real world. This thesis focuses on understanding and improving each of these steps for robustness. (1) First, we show that pretraining on unlabeled data learns transferable representations that improve accuracy even on domains where we had no labels. We explain why pretraining can work in a very different way from some classical intuitions of collapsing representations (domain invariance). Our theory predicts phenomena on real datasets and leads to improved pretraining methods. (2) Next, we show that the standard approach to adaptation (updating all of the model's parameters) can distort pretrained representations and perform poorly out-of-distribution. Our theoretical analysis leads to better methods for adaptation and state-of-the-art accuracies on ImageNet and in applications such as satellite remote sensing, wildlife conservation, and radiology. (3) Finally, when we deploy models in the real world, the data distribution evolves over time, which leads to a drop in model performance. We show that self-training on a model's own predictions can improve robustness to distribution shift, and we explain when and why self-training works.
Language:
English
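Point (2) of the abstract argues that updating all parameters during adaptation can distort pretrained representations. One adaptation strategy from this line of work is linear probing followed by fine-tuning (LP-FT); the sketch below is a minimal, hypothetical PyTorch rendering of that idea, in which the lp_ft function, the backbone/head split, and all hyperparameters are assumptions for illustration rather than the thesis's exact procedure.

import torch
import torch.nn as nn

# Minimal sketch of two-stage adaptation: linear probing, then
# fine-tuning. Stage 1 trains only a fresh linear head on frozen
# pretrained features, so the representations are not distorted;
# stage 2 fine-tunes everything from that initialization.
def lp_ft(backbone: nn.Module, head: nn.Linear, loader,
          lp_epochs: int = 5, ft_epochs: int = 5):
    loss_fn = nn.CrossEntropyLoss()

    # Stage 1: linear probe -- freeze the backbone, train the head only.
    for p in backbone.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    for _ in range(lp_epochs):
        for x, y in loader:
            loss = loss_fn(head(backbone(x)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Stage 2: fine-tune -- unfreeze everything, with a smaller
    # learning rate so the pretrained features move less.
    for p in backbone.parameters():
        p.requires_grad = True
    opt = torch.optim.Adam([*backbone.parameters(), *head.parameters()], lr=1e-5)
    for _ in range(ft_epochs):
        for x, y in loader:
            loss = loss_fn(head(backbone(x)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return backbone, head

The intuition is that stage 1 gives the head a good initialization, so stage 2 starts close to a solution and perturbs the pretrained features less than fine-tuning from a randomly initialized head would.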
LDR    03247nmm a22003733i 4500
001    2400456
005    20250522084129.5
006    m o d
007    cr|nu||||||||
008    251215s2023 miu||||||m |||||||eng d
020    $a 9798380482653
035    $a (MiAaPQD)AAI30615167
035    $a (MiAaPQD)STANFORDgt661gq6831
035    $a AAI30615167
040    $a MiAaPQD $b eng $c MiAaPQD $e rda
100 1  $a Kumar, Ananya, $e author. $3 3770455
245 10 $a Foundation Models for Robust Machine Learning / $c Ananya Kumar.
264  1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2023
300    $a 1 electronic resource (244 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Dissertations Abstracts International, Volume: 85-04, Section: B.
500    $a Advisors: Liang, Percy; Ma, Tengyu. Committee members: Bent, Stacey F.
502    $b Ph.D. $c Stanford University $d 2023.
520    $a Machine learning systems are not robust to distribution shifts: they suffer large drops in accuracy when deployed in environments different from those they were trained on. For example, when satellite remote sensing models are deployed in new countries, tumor detection models are deployed in new hospitals, or wildlife conservation models are deployed in new forests, they face large drops in accuracy. In this thesis, we show that the foundation model paradigm is a principled solution that leads to state-of-the-art robustness. The foundation model paradigm consists of three steps: pretraining a model on diverse unlabeled data (e.g., satellite images from around the world) to learn general-purpose representations, adapting these models to the downstream tasks we care about, and then deploying these models in the real world. This thesis focuses on understanding and improving each of these steps for robustness. (1) First, we show that pretraining on unlabeled data learns transferable representations that improve accuracy even on domains where we had no labels. We explain why pretraining can work in a very different way from some classical intuitions of collapsing representations (domain invariance). Our theory predicts phenomena on real datasets and leads to improved pretraining methods. (2) Next, we show that the standard approach to adaptation (updating all of the model's parameters) can distort pretrained representations and perform poorly out-of-distribution. Our theoretical analysis leads to better methods for adaptation and state-of-the-art accuracies on ImageNet and in applications such as satellite remote sensing, wildlife conservation, and radiology. (3) Finally, when we deploy models in the real world, the data distribution evolves over time, which leads to a drop in model performance. We show that self-training on a model's own predictions can improve robustness to distribution shift, and we explain when and why self-training works.
546    $a English
590    $a School code: 0212
650  4 $a Adaptation. $3 3562958
650  4 $a Connectivity. $3 3560754
690    $a 0800
710 2  $a Stanford University. $e degree granting institution. $3 3765820
720 1  $a Liang, Percy $e degree supervisor.
720 1  $a Ma, Tengyu $e degree supervisor.
773 0  $t Dissertations Abstracts International $g 85-04B.
790    $a 0212
791    $a Ph.D.
792    $a 2023
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30615167
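The block above is a standard MARC 21 bibliographic record. As a usage sketch only, such a record can be read programmatically, for example with the Python pymarc library; the file name record.mrc is a placeholder and assumes the record has been exported in binary MARC format.

from pymarc import MARCReader

# Read a binary MARC export and print the fields shown above:
# title statement (245 $a), ISBN (020 $a), and subject terms (650 $a).
with open("record.mrc", "rb") as fh:
    for record in MARCReader(fh):
        print(record["245"]["a"])   # Foundation Models for Robust Machine Learning /
        print(record["020"]["a"])   # 9798380482653
        for subject in record.get_fields("650"):
            print(subject.get_subfields("a"))  # ['Adaptation.'], ['Connectivity.']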
Holdings:
Barcode: W9508776
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0