Learning Approach for Fast Approximate Matrix Factorizations.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Learning Approach for Fast Approximate Matrix Factorizations. / Yu, Haiyan.
Author:
Yu, Haiyan.
Description:
1 online resource (53 pages)
Notes:
Source: Masters Abstracts International, Volume: 84-01.
Contained By:
Masters Abstracts International, 84-01.
Subject:
Electrical engineering.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29211918 (click for full text, PQDT)
ISBN:
9798837550188
Thesis (M.S.)--University of Denver, 2022.
Includes bibliographical references.
Efficiently computing an (approximate) orthonormal basis and a low-rank approximation of the input data X plays a crucial role in data analysis. Among the most efficient algorithms for such tasks are randomized algorithms, which proceed by computing a projection XA with a random projection matrix A of much smaller size, and then computing the orthonormal basis and low-rank factorizations of the tall matrix XA. While a random matrix A is the de facto choice, in this work we improve upon its performance by using a learning approach to find an adaptive projection matrix A from a set of training data. We derive a closed-form expression for the gradient of the training problem, enabling us to use efficient gradient-based algorithms. Experiments show that the learned dense matrix, trained with eight different objective functions, achieves better performance than a random one. We also extend this approach to learning structured projection matrices, such as a sketching matrix that selects a small number of representative columns from the input data. Our experiments on both synthetic and real data show that both the learned dense and sketch projection matrices outperform random ones in finding approximate orthonormal bases and low-rank approximations. We conclude the thesis by discussing possible approaches for generalization analysis.
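The baseline randomized scheme the abstract describes can be sketched as follows. This is a minimal illustration of the standard randomized range finder with a Gaussian random projection; the function name and shapes are illustrative, and the thesis's contribution (a learned, adaptive projection matrix A) is not shown here.

```python
import numpy as np

def randomized_low_rank(X, k, seed=None):
    """Sketch of the standard randomized range finder.

    Projects X onto a random k-dimensional subspace, orthonormalizes
    the tall sketch, then factors X through that basis.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    A = rng.standard_normal((n, k))   # random projection matrix (n x k, k << n)
    Y = X @ A                          # tall sketch XA: m x k
    Q, _ = np.linalg.qr(Y)             # approximate orthonormal basis for range(X)
    B = Q.T @ X                        # small k x n factor
    return Q, B                        # X is approximated by Q @ B (rank k)

# Usage: build a matrix of exact rank 5 and recover it from a rank-5 sketch.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
Q, B = randomized_low_rank(X, k=5, seed=1)
err = np.linalg.norm(X - Q @ B) / np.linalg.norm(X)
```

Because X here has exact rank 5, the rank-5 sketch captures its range almost surely and the relative error is at machine precision; for general matrices the error depends on the spectrum, which is what motivates learning A instead of drawing it at random.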
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2023.
Mode of access: World Wide Web.
ISBN: 9798837550188
Subjects--Topical Terms: Electrical engineering.
Subjects--Index Terms: Efficiently computing; Low-rank approximation; Approximate orthonormal basis; Sketch projection matrices.
Index Terms--Genre/Form: Electronic books.
LDR :02665nmm a2200361K 4500
001 2354443
005 20230414084758.5
006 m o d
007 cr mn ---uuuuu
008 241011s2022 xx obm 000 0 eng d
020 $a 9798837550188
035 $a (MiAaPQ)AAI29211918
035 $a AAI29211918
040 $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1 $a Yu, Haiyan. $3 3694795
245 1 0 $a Learning Approach for Fast Approximate Matrix Factorizations.
264 0 $c 2022
300 $a 1 online resource (53 pages)
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
500 $a Source: Masters Abstracts International, Volume: 84-01.
500 $a Advisor: Zhu, Zhihui.
502 $a Thesis (M.S.)--University of Denver, 2022.
504 $a Includes bibliographical references
520 $a Efficiently computing an (approximate) orthonormal basis and a low-rank approximation of the input data X plays a crucial role in data analysis. Among the most efficient algorithms for such tasks are randomized algorithms, which proceed by computing a projection XA with a random projection matrix A of much smaller size, and then computing the orthonormal basis and low-rank factorizations of the tall matrix XA. While a random matrix A is the de facto choice, in this work we improve upon its performance by using a learning approach to find an adaptive projection matrix A from a set of training data. We derive a closed-form expression for the gradient of the training problem, enabling us to use efficient gradient-based algorithms. Experiments show that the learned dense matrix, trained with eight different objective functions, achieves better performance than a random one. We also extend this approach to learning structured projection matrices, such as a sketching matrix that selects a small number of representative columns from the input data. Our experiments on both synthetic and real data show that both the learned dense and sketch projection matrices outperform random ones in finding approximate orthonormal bases and low-rank approximations. We conclude the thesis by discussing possible approaches for generalization analysis.
533 $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2023
538 $a Mode of access: World Wide Web
650 4 $a Electrical engineering. $3 649834
653 $a Efficiently computing
653 $a Low-rank approximation
653 $a Approximate orthonormal basis
653 $a Sketch projection matrices
655 7 $a Electronic books. $2 lcsh $3 542853
690 $a 0544
710 2 $a ProQuest Information and Learning Co. $3 783688
710 2 $a University of Denver. $b Electrical Engineering. $3 2102324
773 0 $t Masters Abstracts International $g 84-01.
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29211918 $z click for full text (PQDT)
Items:
Inventory Number: W9476799
Location Name: 電子資源 (Electronic resources)
Item Class: 11.線上閱覽_V (Online reading)
Material type: 電子書 (E-book)
Call number: EB
Usage Class: 一般使用 (Normal)
Loan Status: On shelf
No. of reservations: 0