Statistical Analysis of Scientific Machine Learning
Record type: Bibliographic, electronic resource : Monograph/item
Title / Author: Statistical Analysis of Scientific Machine Learning / Yiping Lu.
Author: Lu, Yiping
Physical description: 1 electronic resource (158 pages)
Notes: Source: Dissertations Abstracts International, Volume: 85-06, Section: B.
Contained by: Dissertations Abstracts International, 85-06B.
Subject: Decision making.
Electronic resource: https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30726802
ISBN: 9798381019001
Massive data collection and computational capabilities have enabled data-driven scientific discovery and control of engineering systems. However, several questions must still be answered to understand the fundamental limits of how much can be discovered from data and what the value of additional information is. For example: 1) How can we learn a physical law or economic principle purely from data? 2) How hard is this task, both computationally and statistically? 3) How does the hardness change when further information (e.g., more data or model information) is added? I answer these three questions in this thesis through two learning tasks. A key insight in both cases is that direct plug-in estimators can result in statistically suboptimal inference.

For the first learning task, the thesis focuses on variational formulations of differential equation models, using a prototypical Poisson equation as the running example. I provide a minimax lower bound for this problem and, based on the lower bound, show that the variance of the direct plug-in estimator makes its sample complexity suboptimal. I also consider the optimization dynamics of different variational forms and, based on our theory, explain the implicit acceleration obtained by using a Sobolev norm as the training objective.

The second learning task is (linear) operator learning, which has wide applications in causal inference, time series modeling, and conditional probability learning. I establish the first minimax lower bound for this problem. The minimax rate has a particular structure in which the more challenging parts of the input and output spaces determine the hardness of learning a linear operator. The analysis also shows that an intuitive discretization of the infinite-dimensional operator can lead to a suboptimal statistical learning rate. I then discuss how, by suitably trading off bias and variance, one can construct an estimator with the optimal learning rate for learning a linear operator between infinite-dimensional spaces, and I illustrate how this theory inspires a multilevel machine learning algorithm of potential practical use.
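The two tasks in the abstract can be made concrete. For the first, here is a minimal sketch of the variational formulation of the Poisson problem, assuming the standard Dirichlet-energy form with zero boundary conditions (the thesis's exact norms and estimators may differ):

\[
-\Delta u^\star = f \ \text{in } \Omega, \quad u^\star = 0 \ \text{on } \partial\Omega
\qquad\Longleftrightarrow\qquad
u^\star = \operatorname*{arg\,min}_{u \in H_0^1(\Omega)} \int_\Omega \Big( \tfrac{1}{2}\,\lvert \nabla u(x) \rvert^2 - f(x)\,u(x) \Big)\, dx.
\]

Given samples $x_i$ from $\Omega$ with noisy observations $y_i = f(x_i) + \varepsilon_i$, the plug-in approach minimizes the empirical average $\tfrac{1}{n}\sum_i \big(\tfrac12 \lvert \nabla u(x_i) \rvert^2 - y_i\, u(x_i)\big)$; the abstract's claim is that the variance of this plug-in objective makes the resulting sample complexity suboptimal, and that a Sobolev-norm residual objective such as $\lVert \Delta u + f \rVert_{H^{-1}}^2$ (one common variant, named here as an assumption) changes the training dynamics.

For the second task, the bias-variance trade-off the abstract describes can be illustrated with a spectral-truncation estimator for a linear operator observed through noisy input/output pairs. This is a hypothetical sketch in a finite-dimensional discretization, not the thesis's construction; every name and constant below is invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 200  # grid size of the discretization, number of samples

# Simulated ground truth: a linear operator with decaying singular values,
# acting on inputs whose covariance spectrum also decays (a discretized
# stand-in for the ill-posed infinite-dimensional problem).
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
A_true = U @ np.diag(1.0 / (1.0 + np.arange(d)))

decay = 1.0 / (1.0 + np.arange(d))
X = rng.standard_normal((n, d)) * decay               # inputs with decaying energy
Y = X @ A_true.T + 0.1 * rng.standard_normal((n, d))  # noisy operator outputs

Sxx = X.T @ X / n   # empirical input covariance
Syx = Y.T @ X / n   # empirical cross-covariance

eigvals, eigvecs = np.linalg.eigh(Sxx)              # ascending eigenvalues
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # reorder to descending

def truncated_estimator(k):
    """Regress only on the top-k empirical principal directions of the input;
    directions with tiny eigenvalues are dropped to control variance."""
    Vk = eigvecs[:, :k]
    return Syx @ Vk @ np.diag(1.0 / eigvals[:k]) @ Vk.T

# The naive plug-in estimator (k = d) inverts near-zero eigenvalues and is
# dominated by variance; an intermediate k trades a little bias for much
# lower variance, mirroring the optimal-rate construction in the abstract.
for k in (d, 20, 5):
    err = np.linalg.norm(truncated_estimator(k) - A_true)  # Frobenius = HS norm
    print(f"k = {k:2d}, Hilbert-Schmidt error = {err:.3f}")
```

Run as-is, the naive plug-in (k = d) typically shows the largest error, while moderate truncation levels do noticeably better; this is the "suitably trading off bias and variance" point in the abstract, in miniature.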
MARC record:
LDR    03463nmm a22003733i 4500
001    2400488
005    20250522084137.5
006    m o d
007    cr|nu||||||||
008    251215s2023 miu||||||m |||||||eng d
020    $a 9798381019001
035    $a (MiAaPQD)AAI30726802
035    $a (MiAaPQD)STANFORDpp190dc8926
035    $a AAI30726802
040    $a MiAaPQD $b eng $c MiAaPQD $e rda
100 1  $a Lu, Yiping, $e author. $3 3770505
245 1 0 $a Statistical Analysis of Scientific Machine Learning / $c Yiping Lu.
264  1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2023
300    $a 1 electronic resource (158 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Dissertations Abstracts International, Volume: 85-06, Section: B.
500    $a Advisors: Ying, Lexing; Mancilla, Jose Blanchet. Committee members: Bent, Stacey F.
502    $b Ph.D. $c Stanford University $d 2023.
520    $a Massive data collection and computational capabilities have enabled data-driven scientific discovery and control of engineering systems. However, several questions must still be answered to understand the fundamental limits of how much can be discovered from data and what the value of additional information is. For example: 1) How can we learn a physical law or economic principle purely from data? 2) How hard is this task, both computationally and statistically? 3) How does the hardness change when further information (e.g., more data or model information) is added? I answer these three questions in this thesis through two learning tasks. A key insight in both cases is that direct plug-in estimators can result in statistically suboptimal inference. For the first learning task, the thesis focuses on variational formulations of differential equation models, using a prototypical Poisson equation as the running example. I provide a minimax lower bound for this problem and, based on the lower bound, show that the variance of the direct plug-in estimator makes its sample complexity suboptimal. I also consider the optimization dynamics of different variational forms and, based on our theory, explain the implicit acceleration obtained by using a Sobolev norm as the training objective. The second learning task is (linear) operator learning, which has wide applications in causal inference, time series modeling, and conditional probability learning. I establish the first minimax lower bound for this problem. The minimax rate has a particular structure in which the more challenging parts of the input and output spaces determine the hardness of learning a linear operator. The analysis also shows that an intuitive discretization of the infinite-dimensional operator can lead to a suboptimal statistical learning rate. I then discuss how, by suitably trading off bias and variance, one can construct an estimator with the optimal learning rate for learning a linear operator between infinite-dimensional spaces, and I illustrate how this theory inspires a multilevel machine learning algorithm of potential practical use.
546    $a English
590    $a School code: 0212
650  4 $a Decision making. $3 517204
650  4 $a Neural networks. $3 677449
690    $a 0800
710 2  $a Stanford University. $e degree granting institution. $3 3765820
720 1  $a Ying, Lexing $e degree supervisor.
720 1  $a Mancilla, Jose Blanchet $e degree supervisor.
773 0  $t Dissertations Abstracts International $g 85-06B.
790    $a 0212
791    $a Ph.D.
792    $a 2023
856 4 0 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30726802
Holdings (1 item):
Barcode: W9508808
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0