Learning Approach for Fast Approximate Matrix Factorizations.
Record type: Bibliographic - electronic resource : Monograph/item
Title/Author: Learning Approach for Fast Approximate Matrix Factorizations. /
Author: Yu, Haiyan.
Description: 1 online resource (53 pages)
Notes: Source: Masters Abstracts International, Volume: 84-01.
Contained by: Masters Abstracts International, 84-01.
Subject: Electrical engineering.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29211918 (click for full text, PQDT)
ISBN: 9798837550188
Thesis (M.S.)--University of Denver, 2022.
Includes bibliographical references
Efficiently computing an (approximate) orthonormal basis and a low-rank approximation for input data X plays a crucial role in data analysis. One of the most efficient algorithms for such tasks is the randomized algorithm, which proceeds by computing a projection XA with a random projection matrix A of much smaller size, and then computing the orthonormal basis as well as low-rank factorizations of the tall matrix XA. While a random matrix A is the de facto choice, in this work we improve upon its performance by using a learning approach to find an adaptive projection matrix A from a set of training data. We derive a closed-form formulation for the gradient of the training problem, enabling us to use efficient gradient-based algorithms. Experiments show that the learned dense matrix, trained with eight different objective functions, achieves better performance than a random one. We also extend this approach to learning structured projection matrices, such as a sketching matrix that selects a small number of representative columns from the input data. Our experiments on both synthetic and real data show that both the learned dense and sketch projection matrices outperform random ones in finding approximate orthonormal bases and low-rank approximations. We conclude the thesis by discussing possible approaches for generalization analysis.
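The randomized baseline that the abstract improves upon can be sketched in a few lines of NumPy. This is an illustrative sketch only; the function and parameter names are not from the thesis, and the thesis replaces the Gaussian matrix A below with a learned one:

```python
import numpy as np

def randomized_low_rank(X, k, oversample=5, seed=None):
    """Randomized baseline: project X with a random Gaussian A,
    orthonormalize the tall matrix XA, then form a rank-(k+oversample)
    approximation of X from that basis."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    A = rng.standard_normal((n, k + oversample))  # random projection matrix A
    Q, _ = np.linalg.qr(X @ A)                    # orthonormal basis for range(XA)
    X_approx = Q @ (Q.T @ X)                      # project X onto that basis
    return Q, X_approx

# Usage: approximate a 100x80 matrix of exact rank 10
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10)) @ rng.standard_normal((10, 80))
Q, Xk = randomized_low_rank(X, k=10, seed=0)
print(np.linalg.norm(X - Xk) / np.linalg.norm(X))  # small relative error (X has exact rank 10)
```

Since the sketch dimension k + oversample exceeds the true rank here, the basis Q captures the range of X almost surely and the approximation error is near machine precision; on genuinely full-rank data the error depends on how fast the singular values decay, which is where a learned A can help.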
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2023.
Mode of access: World Wide Web.
ISBN: 9798837550188
Subjects--Topical Terms: Electrical engineering. (649834)
Subjects--Index Terms: Efficiently computing
Index Terms--Genre/Form: Electronic books. (542853)
LDR    02665nmm a2200361K 4500
001    2354443
005    20230414084758.5
006    m     o  d
007    cr mn ---uuuuu
008    241011s2022    xx obm         000 0 eng d
020    $a 9798837550188
035    $a (MiAaPQ)AAI29211918
035    $a AAI29211918
040    $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Yu, Haiyan. $3 3694795
245 10 $a Learning Approach for Fast Approximate Matrix Factorizations.
264  0 $c 2022
300    $a 1 online resource (53 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Masters Abstracts International, Volume: 84-01.
500    $a Advisor: Zhu, Zhihui.
502    $a Thesis (M.S.)--University of Denver, 2022.
504    $a Includes bibliographical references
520    $a
Efficiently computing an (approximate) orthonormal basis and a low-rank approximation for input data X plays a crucial role in data analysis. One of the most efficient algorithms for such tasks is the randomized algorithm, which proceeds by computing a projection XA with a random projection matrix A of much smaller size, and then computing the orthonormal basis as well as low-rank factorizations of the tall matrix XA. While a random matrix A is the de facto choice, in this work we improve upon its performance by using a learning approach to find an adaptive projection matrix A from a set of training data. We derive a closed-form formulation for the gradient of the training problem, enabling us to use efficient gradient-based algorithms. Experiments show that the learned dense matrix, trained with eight different objective functions, achieves better performance than a random one. We also extend this approach to learning structured projection matrices, such as a sketching matrix that selects a small number of representative columns from the input data. Our experiments on both synthetic and real data show that both the learned dense and sketch projection matrices outperform random ones in finding approximate orthonormal bases and low-rank approximations. We conclude the thesis by discussing possible approaches for generalization analysis.
533    $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2023
538    $a Mode of access: World Wide Web
650  4 $a Electrical engineering. $3 649834
653    $a Efficiently computing
653    $a Low-rank approximation
653    $a Approximate orthonormal basis
653    $a Sketch projection matrices
655  7 $a Electronic books. $2 lcsh $3 542853
690    $a 0544
710 2  $a ProQuest Information and Learning Co. $3 783688
710 2  $a University of Denver. $b Electrical Engineering. $3 2102324
773 0  $t Masters Abstracts International $g 84-01.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29211918 $z click for full text (PQDT)
Holdings (1 item):
Barcode: W9476799
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal (general use)
Loan status: On shelf
Hold status: 0