Towards Efficient Deep Learning in Computer Vision via Network Sparsity and Distillation.
Record Type: Bibliographic - Electronic Resource : Monograph/item
Title/Author: Towards Efficient Deep Learning in Computer Vision via Network Sparsity and Distillation.
Author: Wang, Huan.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2024.
Description: 251 p.
Notes: Source: Dissertations Abstracts International, Volume: 85-10, Section: B.
Contained By: Dissertations Abstracts International, 85-10B.
Subject: Computer engineering.
Electronic Resource: https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31241458
ISBN: 9798382318530
Thesis (Ph.D.)--Northeastern University, 2024.
Artificial intelligence (AI), empowered by deep learning, has been profoundly transforming the world. However, the excessive size of these models remains a central obstacle that limits their broader utility. Modern neural networks commonly consist of millions of parameters, with foundation models extending to billions. The rapid expansion in model size introduces many challenges, including training cost, sluggish inference speed, excessive energy consumption, and negative environmental implications such as increased CO2 emissions.
Addressing these challenges necessitates efficient deep learning (EDL). The dissertation focuses on two overarching approaches, network sparsity (a.k.a. pruning) and knowledge distillation, to enhance the efficiency of deep learning models in the context of computer vision. Network pruning eliminates redundant parameters in a model while preserving its performance. Knowledge distillation enhances the performance of the target model, referred to as the "student", by leveraging guidance from a stronger model, known as the "teacher"; it improves the target model without reducing its size.
The dissertation starts with the background and motivation for more efficient deep learning models in recent years, set against the rise of foundation models. The basic concepts, goals, and challenges of EDL are then introduced, along with the major sub-methods. The main body of the dissertation elaborates on the proposed efficiency algorithms based on pruning and distillation across a variety of applications.
For the pruning part, the dissertation first presents an effective pruning algorithm, GReg, for image classification, built on a growing regularization strategy. Then, to understand the real progress of network pruning, a fairness principle is introduced so that different pruning methods can be compared fairly. The investigation reveals the central role of network trainability in pruning, which has been largely overlooked by prior work. A trainability-preserving pruning approach, TPP, is then proposed to demonstrate the merits of maintaining trainability during pruning. A short survey of an emerging pruning paradigm, pruning at initialization, discusses its potential and its connections with conventional pruning after training. The GReg algorithm is further extended to a low-level vision task, single image super-resolution (SR), to explore the differences between applying pruning in low-level vision (SR) and in high-level vision (image classification). Three efficient SR approaches (ASSL, GASSL, SRP) are introduced.
For the distillation part, the dissertation first focuses on the interaction between knowledge distillation and data augmentation in image classification, with a proved proposition presented to rigorously characterize what defines the "goodness" of a data augmentation scheme in distillation. Next, the dissertation shows how distillation can substantially improve the inference efficiency of novel view synthesis in 3D vision; both static and dynamic scenes are considered.
Finally, SnapFusion demonstrates systematic efficiency optimization of deep models by jointly applying pruning and distillation, achieving unprecedentedly fast text-to-image generation with diffusion models. A comprehensive summary, together with takeaways and an outlook on future work, concludes the dissertation. Major takeaways include: (1) there is no panacea for efficient deep learning across all tasks; the solution is usually case by case; (2) there is a clear trend that efficiency solutions for future models (especially large models) will feature systematic optimization and co-design across many axes (e.g., hardware, system, and algorithm); (3) profiling is always a good starting point for understanding the problem and building the right efficiency portfolio.
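The abstract above refers to network pruning and knowledge distillation only at a conceptual level. For readers unfamiliar with these families of techniques, the following sketch illustrates their textbook forms, a magnitude-based pruning mask and the standard softened-logit distillation loss, in PyTorch-style Python. This is a generic illustration and not the dissertation's GReg, TPP, or SnapFusion algorithms; the function names and hyperparameter values (sparsity, temperature, alpha) are hypothetical choices made for the example.

import torch
import torch.nn.functional as F

def magnitude_prune_mask(weight, sparsity):
    # Keep the largest-magnitude weights and zero out the rest (unstructured pruning).
    num_prune = int(weight.numel() * sparsity)
    if num_prune == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(num_prune).values
    return (weight.abs() > threshold).to(weight.dtype)   # 1 = keep, 0 = prune

def distillation_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.5):
    # Hard-label cross-entropy plus KL divergence to the teacher's softened predictions
    # (classic logit distillation; alpha balances the two terms).
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)   # T^2 restores the gradient scale of the softened targets
    return alpha * ce + (1.0 - alpha) * kd

In a typical workflow, the mask would be applied to each layer's weights and the pruned network fine-tuned, while the distillation loss would replace plain cross-entropy when training the student against a fixed teacher.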
Subjects--Topical Terms: Computer engineering.
Subjects--Index Terms: Computer vision
LDR   05211nmm a2200397 4500
001   2401477
005   20241022112625.5
006   m o d
007   cr#unu||||||||
008   251215s2024 ||||||||||||||||| ||eng d
020   $a 9798382318530
035   $a (MiAaPQ)AAI31241458
035   $a AAI31241458
035   $a 2401477
040   $a MiAaPQ $c MiAaPQ
100 1 $a Wang, Huan. $3 1678751
245 10 $a Towards Efficient Deep Learning in Computer Vision via Network Sparsity and Distillation.
260 1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2024
300   $a 251 p.
500   $a Source: Dissertations Abstracts International, Volume: 85-10, Section: B.
500   $a Advisor: Fu, Yun.
502   $a Thesis (Ph.D.)--Northeastern University, 2024.
520   $a Artificial intelligence (AI), empowered by deep learning, has been profoundly transforming the world. However, the excessive size of these models remains a central obstacle that limits their broader utility. Modern neural networks commonly consist of millions of parameters, with foundation models extending to billions. The rapid expansion in model size introduces many challenges, including training cost, sluggish inference speed, excessive energy consumption, and negative environmental implications such as increased CO2 emissions. Addressing these challenges necessitates efficient deep learning (EDL). The dissertation focuses on two overarching approaches, network sparsity (a.k.a. pruning) and knowledge distillation, to enhance the efficiency of deep learning models in the context of computer vision. Network pruning eliminates redundant parameters in a model while preserving its performance. Knowledge distillation enhances the performance of the target model, referred to as the "student", by leveraging guidance from a stronger model, known as the "teacher"; it improves the target model without reducing its size. The dissertation starts with the background and motivation for more efficient deep learning models in recent years, set against the rise of foundation models. The basic concepts, goals, and challenges of EDL are then introduced, along with the major sub-methods. The main body of the dissertation elaborates on the proposed efficiency algorithms based on pruning and distillation across a variety of applications. For the pruning part, the dissertation first presents an effective pruning algorithm, GReg, for image classification, built on a growing regularization strategy. Then, to understand the real progress of network pruning, a fairness principle is introduced so that different pruning methods can be compared fairly. The investigation reveals the central role of network trainability in pruning, which has been largely overlooked by prior work. A trainability-preserving pruning approach, TPP, is then proposed to demonstrate the merits of maintaining trainability during pruning. A short survey of an emerging pruning paradigm, pruning at initialization, discusses its potential and its connections with conventional pruning after training. The GReg algorithm is further extended to a low-level vision task, single image super-resolution (SR), to explore the differences between applying pruning in low-level vision (SR) and in high-level vision (image classification). Three efficient SR approaches (ASSL, GASSL, SRP) are introduced. For the distillation part, the dissertation first focuses on the interaction between knowledge distillation and data augmentation in image classification, with a proved proposition presented to rigorously characterize what defines the "goodness" of a data augmentation scheme in distillation. Next, the dissertation shows how distillation can substantially improve the inference efficiency of novel view synthesis in 3D vision; both static and dynamic scenes are considered. Finally, SnapFusion demonstrates systematic efficiency optimization of deep models by jointly applying pruning and distillation, achieving unprecedentedly fast text-to-image generation with diffusion models. A comprehensive summary, together with takeaways and an outlook on future work, concludes the dissertation. Major takeaways include: (1) there is no panacea for efficient deep learning across all tasks; the solution is usually case by case; (2) there is a clear trend that efficiency solutions for future models (especially large models) will feature systematic optimization and co-design across many axes (e.g., hardware, system, and algorithm); (3) profiling is always a good starting point for understanding the problem and building the right efficiency portfolio.
590   $a School code: 0160.
650  4 $a Computer engineering. $3 621879
650  4 $a Computer science. $3 523869
653   $a Computer vision
653   $a Efficient deep learning
653   $a Knowledge distillation
653   $a Network pruning
653   $a Network sparsity
690   $a 0464
690   $a 0800
690   $a 0984
710 2 $a Northeastern University. $b Electrical and Computer Engineering. $3 1018491
773 0 $t Dissertations Abstracts International $g 85-10B.
790   $a 0160
791   $a Ph.D.
792   $a 2024
793   $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31241458
Holdings:
Barcode: W9509797
Location: Electronic Resources
Circulation category: 11. Online Reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0