Towards Fast and Efficient Representation Learning.
Record type: Bibliographic - Electronic resource : Monograph/item
Title/Author: Towards Fast and Efficient Representation Learning. / Li, Hao.
Author: Li, Hao.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2018.
Extent: 147 p.
Note: Source: Dissertation Abstracts International, Volume: 80-02(E), Section: B.
Contained by: Dissertation Abstracts International 80-02B(E).
Subject: Computer science.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10845690
ISBN: 9780438402287
LDR    03900nmm a2200337 4500
001    2201325
005    20190429062346.5
008    201008s2018 ||||||||||||||||| ||eng d
020    $a 9780438402287
035    $a (MiAaPQ)AAI10845690
035    $a (MiAaPQ)umd:19368
035    $a AAI10845690
040    $a MiAaPQ $c MiAaPQ
100 1  $a Li, Hao. $3 1029766
245 10 $a Towards Fast and Efficient Representation Learning.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2018
300    $a 147 p.
500    $a Source: Dissertation Abstracts International, Volume: 80-02(E), Section: B.
500    $a Advisers: Hanan Samet; Thomas Goldstein.
502    $a Thesis (Ph.D.)--University of Maryland, College Park, 2018.
520    $a The success of deep learning and convolutional neural networks in many fields is accompanied by a significant increase in the computation cost. With the increasing model complexity and pervasive usage of deep neural networks, there is a surge of interest in fast and efficient model training and inference on both cloud and embedded devices. Meanwhile, understanding the reasons for trainability and generalization is fundamental for its further development. This dissertation explores approaches for fast and efficient representation learning with a better understanding of trainability and generalization. In particular, we ask the following questions and provide our solutions: 1) How to reduce the computation cost for fast inference? 2) How to train low-precision models on resource-constrained devices? 3) What does the loss surface look like for neural nets, and how does it affect generalization?
520    $a To reduce the computation cost for fast inference, we propose to prune filters from CNNs that are identified as having a small effect on the prediction accuracy. By removing filters with small norms together with their connected feature maps, the computation cost can be reduced accordingly without using special software or hardware. We show that a simple filter pruning approach can reduce the inference cost while regaining accuracy close to the original by retraining the networks.
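The norm-based criterion summarized in the note above can be illustrated with a short sketch. The following minimal NumPy example is not code from the dissertation: it scores each convolutional filter by its L1 norm and keeps the highest-scoring fraction, which is the general idea behind magnitude-based filter pruning. The function name, the keep ratio, and the 4-D weight layout are illustrative assumptions.

```python
import numpy as np

def prune_filters(conv_weight: np.ndarray, keep_ratio: float = 0.5):
    """Magnitude-based filter pruning sketch (illustrative, not the author's code).

    conv_weight: array of shape (out_channels, in_channels, kH, kW),
                 the usual layout for a convolutional layer's weights.
    keep_ratio:  fraction of output filters to keep.
    Returns the pruned weight tensor and the indices of the kept filters.
    """
    out_channels = conv_weight.shape[0]
    # Score each filter by the L1 norm of its weights.
    scores = np.abs(conv_weight).reshape(out_channels, -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * out_channels)))
    # Keep the filters with the largest norms; drop the rest. In a full
    # implementation their output feature maps, and the matching input
    # channels of the next layer, would be removed as well.
    keep_idx = np.sort(np.argsort(scores)[-n_keep:])
    return conv_weight[keep_idx], keep_idx

# Toy usage: a conv layer with 8 filters of shape 3x3x3.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
w_pruned, kept = prune_filters(w, keep_ratio=0.5)
print(w_pruned.shape, kept)  # (4, 3, 3, 3) and the surviving filter indices
```

As the abstract notes, the pruned network would then be retrained to recover most of the accuracy lost by removing the filters.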
520    $a To further reduce the inference cost, quantizing model parameters with low-precision representations has shown significant speedup, especially for edge devices that have limited computing resources, memory capacity, and power consumption. To enable on-device learning on low-power systems, removing the dependency on a full-precision model during training is the key challenge. We study various quantized training methods with the goal of understanding the differences in behavior and the reasons for success or failure. We address the issue of why algorithms that maintain floating-point representations work so well, while fully quantized training methods stall before training is complete. We show that training algorithms that exploit high-precision representations have an important greedy search phase that purely quantized training methods lack, which explains the difficulty of training using low-precision arithmetic.
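The contrast drawn in the note above can be made concrete with a minimal, hypothetical NumPy sketch (not taken from the dissertation). Both variants below quantize the weight for the forward pass, but only the first keeps a high-precision accumulator; the second rounds the stored weight after every update and stalls once the updates are smaller than the quantization step.

```python
import numpy as np

def quantize(w, step=0.1):
    """Uniform quantization to a fixed grid (illustrative)."""
    return np.round(w / step) * step

def train(x, y, fully_quantized: bool, lr=0.01, step=0.1, iters=200):
    """Fit y ~ w*x with a quantized weight in the forward pass.

    fully_quantized=False keeps a float 'shadow' weight and quantizes only
    for the forward pass; fully_quantized=True rounds the stored weight
    after every step, so small gradient updates are lost and training can
    stall -- the behavior contrasted in the abstract.
    """
    w = 0.0
    for _ in range(iters):
        wq = quantize(w, step)
        grad = np.mean(2 * (wq * x - y) * x)   # d/dw of the MSE, straight-through
        w = w - lr * grad
        if fully_quantized:
            w = quantize(w, step)              # no high-precision accumulator
    return quantize(w, step)

rng = np.random.default_rng(0)
x = rng.normal(size=256)
y = 0.37 * x                                   # true weight lies between grid points
print("shadow float weight :", train(x, y, fully_quantized=False))
print("fully quantized     :", train(x, y, fully_quantized=True))
```

In this toy setup the shadow-weight variant accumulates many small updates and reaches the grid point nearest the true weight, while the fully quantized variant never leaves zero because each update is rounded away.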
520    $a Finally, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. We introduce a simple filter normalization method that helps us visualize loss function curvature, and make meaningful side-by-side comparisons between loss functions. The sharpness of minimizers correlates well with generalization error when this visualization is used. Then, using a variety of visualizations, we explore how training hyper-parameters affect the shape of minimizers, and how network architecture affects the loss landscape.
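For concreteness, the following toy NumPy sketch is an assumption on my part rather than code from the thesis: it shows the filter normalization idea for a 1-D slice of a loss surface. A random direction is rescaled filter-by-filter to match the norms of the trained filters before the loss is evaluated along it; in practice the loss would come from evaluating a real network, not the synthetic quadratic used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" conv weights: 8 filters of shape 3x3x3, plus a synthetic
# quadratic loss standing in for a real network's training loss.
w_star = rng.normal(size=(8, 3, 3, 3))
H = rng.normal(size=(w_star.size, w_star.size))
H = H @ H.T / w_star.size                      # positive semi-definite curvature

def loss(w):
    delta = (w - w_star).ravel()
    return 0.5 * delta @ H @ delta

def filter_normalized_direction(w):
    """Random direction rescaled filter-by-filter so each block has the
    same norm as the corresponding trained filter (the filter normalization
    idea described in the abstract; this toy setup is an assumption)."""
    d = rng.normal(size=w.shape)
    for f in range(w.shape[0]):
        d[f] *= np.linalg.norm(w[f]) / (np.linalg.norm(d[f]) + 1e-10)
    return d

# 1-D slice of the loss surface along the filter-normalized direction.
d = filter_normalized_direction(w_star)
alphas = np.linspace(-1.0, 1.0, 11)
for a in alphas:
    print(f"alpha={a:+.1f}  loss={loss(w_star + a * d):.3f}")
```

Because every direction is rescaled to the model's own filter norms, curves produced this way can be compared side by side across networks whose weights live at very different scales.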
590    $a School code: 0117.
650  4 $a Computer science. $3 523869
650  4 $a Artificial intelligence. $3 516317
690    $a 0984
690    $a 0800
710 2  $a University of Maryland, College Park. $b Computer Science. $3 1018451
773 0  $t Dissertation Abstracts International $g 80-02B(E).
790    $a 0117
791    $a Ph.D.
792    $a 2018
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10845690
Holdings (1 item)
Barcode: W9377874
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online access)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Hold status: 0