Multiple Learning for Generalized Linear Models in Big Data.
Record Type:
Bibliographic - Electronic resource : Monograph/item
Title/Author:
Multiple Learning for Generalized Linear Models in Big Data.
Author:
Liu, Xiang.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2021
Physical Description:
84 p.
Notes:
Source: Dissertations Abstracts International, Volume: 85-01, Section: A.
Contained By:
Dissertations Abstracts International, 85-01A.
Subject:
Independent variables.
Electronic Resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30505370
ISBN:
9798379837310
Thesis (Ph.D.)--Purdue University, 2021.
Big data is an enabling technology in digital transformation. It complements ordinary linear models and generalized linear models perfectly, since training well-performing ordinary and generalized linear models requires huge amounts of data. With the help of big data, ordinary and generalized linear models can be well trained and thus offer better services to human beings. However, there are still many challenges to address when training ordinary and generalized linear models on big data. One of the most prominent is the computational challenge: the memory inflation and training inefficiency issues that occur when processing data and training models. Hundreds of algorithms have been proposed to alleviate or overcome the memory inflation issues, but the solutions they obtain are only locally optimal. Additionally, most of the proposed algorithms require loading the dataset into RAM many times when updating the model parameters, and if multiple model hyper-parameters need to be computed and compared, e.g. in ridge regression, parallel computing techniques must be applied in practice. Thus, multiple learning with sufficient statistics arrays is proposed to tackle the memory inflation and training inefficiency issues.
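To make the closing idea concrete: for the ridge-regression case the abstract mentions, the sufficient statistics X'X and X'y can be accumulated in a single pass over chunked data, after which any number of penalty values can be fitted without re-reading the dataset. The sketch below illustrates that general technique only; it is not the dissertation's own code, and the chunk layout, function names, and penalty grid are hypothetical.

```python
import numpy as np

def accumulate_sufficient_stats(chunks, n_features):
    # One pass over the data: accumulate X'X (p x p) and X'y (length p).
    xtx = np.zeros((n_features, n_features))
    xty = np.zeros(n_features)
    for X, y in chunks:  # each chunk fits in RAM; the full dataset need not
        xtx += X.T @ X
        xty += X.T @ y
    return xtx, xty

def fit_ridge_grid(xtx, xty, lambdas):
    # Solve (X'X + lam*I) beta = X'y for every lam; no further data access.
    p = xtx.shape[0]
    return {lam: np.linalg.solve(xtx + lam * np.eye(p), xty) for lam in lambdas}

# Toy usage: two in-memory chunks standing in for an out-of-core dataset.
rng = np.random.default_rng(0)
chunks = [(rng.standard_normal((500, 3)), rng.standard_normal(500)) for _ in range(2)]
xtx, xty = accumulate_sufficient_stats(chunks, n_features=3)
betas = fit_ridge_grid(xtx, xty, lambdas=[0.1, 1.0, 10.0])
print({lam: beta.round(3) for lam, beta in betas.items()})
```

Since X'X is only p x p, the memory footprint after the single pass is independent of the number of observations, and each additional penalty value costs one small linear solve rather than another pass over the data.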
MARC Record:
LDR    02424nmm a2200397 4500
001    2399173
005    20240909100742.5
006    m o d
007    cr#unu||||||||
008    251215s2021 ||||||||||||||||| ||eng d
020    $a 9798379837310
035    $a (MiAaPQ)AAI30505370
035    $a (MiAaPQ)Purdue17153546
035    $a AAI30505370
040    $a MiAaPQ $c MiAaPQ
100 1  $a Liu, Xiang. $3 1275849
245 10 $a Multiple Learning for Generalized Linear Models in Big Data.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300    $a 84 p.
500    $a Source: Dissertations Abstracts International, Volume: 85-01, Section: A.
500    $a Advisor: Yang, Baijian; Zhang, Tonglin.
502    $a Thesis (Ph.D.)--Purdue University, 2021.
520    $a Big data is an enabling technology in digital transformation. It complements ordinary linear models and generalized linear models perfectly, since training well-performing ordinary and generalized linear models requires huge amounts of data. With the help of big data, ordinary and generalized linear models can be well trained and thus offer better services to human beings. However, there are still many challenges to address when training ordinary and generalized linear models on big data. One of the most prominent is the computational challenge: the memory inflation and training inefficiency issues that occur when processing data and training models. Hundreds of algorithms have been proposed to alleviate or overcome the memory inflation issues, but the solutions they obtain are only locally optimal. Additionally, most of the proposed algorithms require loading the dataset into RAM many times when updating the model parameters, and if multiple model hyper-parameters need to be computed and compared, e.g. in ridge regression, parallel computing techniques must be applied in practice. Thus, multiple learning with sufficient statistics arrays is proposed to tackle the memory inflation and training inefficiency issues.
590    $a School code: 0183.
650  4 $a Independent variables. $3 3762849
650  4 $a Mean square errors. $3 3562318
650  4 $a Statistics. $3 517247
650  4 $a Random access memory. $3 623617
650  4 $a Dependent variables. $3 3696640
650  4 $a Iterative methods. $3 3686120
650  4 $a Optimization techniques. $3 3681622
650  4 $a Disk drives. $3 3769142
650  4 $a Normal distribution. $3 3561025
650  4 $a Data processing. $3 680224
650  4 $a Hard disks. $3 3694282
650  4 $a University students. $3 3685836
650  4 $a Cloud computing. $3 1016782
650  4 $a Computer science. $3 523869
650  4 $a Higher education. $3 641065
650  4 $a Information technology. $3 532993
650  4 $a Mathematics. $3 515831
650  4 $a Web studies. $3 2122754
690    $a 0463
690    $a 0800
690    $a 0984
690    $a 0745
690    $a 0489
690    $a 0338
690    $a 0405
690    $a 0646
710 2  $a Purdue University. $3 1017663
773 0  $t Dissertations Abstracts International $g 85-01A.
790    $a 0183
791    $a Ph.D.
792    $a 2021
793    $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30505370
Holdings:
Barcode: W9507493
Location: Electronic Resources
Circulation Category: 11. Online Reading_V
Material Type: E-book
Call Number: EB
Use Type: Normal
Loan Status: On shelf
Hold Status: 0