Dynamic Sparsity for Efficient Machine Learning.
Record type: Bibliographic - Electronic resource : Monograph/item
Title/Author: Dynamic Sparsity for Efficient Machine Learning / Liu, Zichang.
Author: Liu, Zichang.
Published: Ann Arbor : ProQuest Dissertations & Theses, 2024
Description: 145 p.
Note: Source: Dissertations Abstracts International, Volume: 86-02, Section: B.
Contained By: Dissertations Abstracts International, 86-02B.
Subject: Computer science.
Electronic resource: https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31532859
ISBN: 9798383417386
Dissertation note: Thesis (Ph.D.)--Rice University, 2024.
Advisor: Shrivastava, Anshumali.
Abstract: Over the past decades, machine learning (ML) models have delivered remarkable accomplishments in various applications. For example, large language models have ushered in a new wave of excitement in artificial intelligence. Interestingly, these accomplishments also unveil the scaling law in machine learning: larger models, equipped with more parameters and trained on more extensive datasets, often significantly outperform their smaller counterparts. However, the trend of increasing model size inevitably introduces unprecedented computational resource requirements, creating substantial challenges in model training and deployment. This thesis aims to improve the efficiency of ML models through algorithmic advancements. Specifically, we exploit the dynamic sparsity pattern inside ML models to achieve efficiency goals. Dynamic sparsity refers to the subset of parameters or activations that are important for a given input; different inputs may exhibit different dynamic sparsity patterns. We advocate identifying the dynamic sparsity pattern for each input and focusing computation and memory resources on it. The first part of this thesis centers on the inference stage. We verify the existence of dynamic sparsity in trained ML models, namely within the classification layer, the attention mechanism, and the transformer layers of trained models. Further, we demonstrate that such dynamic sparsity can be cheaply predicted and leveraged for each input to improve inference efficiency. The second part of the dissertation shifts its focus to the training stage, where dynamic sparsity emerges as a tool to mitigate catastrophic forgetting or data heterogeneity in federated learning, improving training efficiency.
Subjects--Index Terms: Machine learning
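The abstract only sketches the mechanism at a high level. As a rough illustration of the idea, the following is a minimal sketch, not taken from the dissertation, of per-input dynamic sparsity in a single MLP layer: a cheap low-rank predictor scores the hidden units for each input, and only the top-k predicted units are actually computed. The predictor P, the constant k, and the function sparse_mlp are hypothetical placeholders, not the thesis's actual method.

```python
# Illustrative sketch of per-input dynamic sparsity (hypothetical, not the author's code).
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out, k = 64, 1024, 64, 128  # k << d_hidden

W1 = rng.standard_normal((d_in, d_hidden)) / np.sqrt(d_in)
W2 = rng.standard_normal((d_hidden, d_out)) / np.sqrt(d_hidden)
# Hypothetical cheap low-rank predictor of which hidden units matter for a given input.
P = rng.standard_normal((d_in, 16)) @ rng.standard_normal((16, d_hidden))

def sparse_mlp(x):
    scores = x @ P                          # cheap proxy for the hidden pre-activations
    active = np.argsort(-scores)[:k]        # per-input "dynamic sparsity" pattern
    h = np.maximum(x @ W1[:, active], 0.0)  # compute only the selected hidden units
    return h @ W2[active, :]                # and only the matching rows of the output weights

x = rng.standard_normal(d_in)
print(sparse_mlp(x).shape)  # (64,)
```

Because only k of the d_hidden units are touched per input, the per-example compute scales with k rather than with the full hidden width, which is the efficiency argument the abstract makes for inference.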
LDR    02889nmm a2200397 4500
001    2402282
005    20241028051517.5
006    m o d
007    cr#unu||||||||
008    251215s2024 ||||||||||||||||| ||eng d
020    $a 9798383417386
035    $a (MiAaPQ)AAI31532859
035    $a (MiAaPQ)0187rice5035Liu
035    $a AAI31532859
035    $a 2402282
040    $a MiAaPQ $c MiAaPQ
100 1  $a Liu, Zichang. $3 3772504
245 10 $a Dynamic Sparsity for Efficient Machine Learning.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2024
300    $a 145 p.
500    $a Source: Dissertations Abstracts International, Volume: 86-02, Section: B.
500    $a Advisor: Shrivastava, Anshumali.
502    $a Thesis (Ph.D.)--Rice University, 2024.
520    $a Over the past decades, machine learning (ML) models have delivered remarkable accomplishments in various applications. For example, large language models usher in a new wave of excitement in artificial intelligence. Interestingly, these accomplishments also unveil the scaling law in machine learning: larger models, equipped with more parameters and trained on more extensive datasets, often significantly outperform their smaller counterparts. However, the trends of increasing model size inevitably introduce unprecedented computation resource requirements, creating substantial challenges in model training and deployments. This thesis aims to improve the efficiency of ML models through algorithmic advancements. Specifically, we exploit the dynamic sparsity pattern inside ML models to achieve efficiency goals. Dynamic sparsity refers to the subset of parameters or activations that are important for a certain data, and different data may have a different dynamic sparsity pattern. We advocate identifying the dynamic sparsity pattern for each data set and focusing computation and memory resources on it. The first part of this thesis centers around the inference stage. We verify the existence of dynamic sparsity in trained ML models, namely, within the classification layer, attention mechanism, and transformer layers of trained models. Further, we demonstrate that such dynamic sparsity can be cheaply predicted and leveraged for each data to improve the inference efficiency goals. The subsequent part of the dissertation will shift its focus to the training stage, where dynamic sparsity emerges as a tool to mitigate the problem of catastrophic forgetting or data heterogeneity in federated learning to improve training efficiency.
590    $a School code: 0187.
650  4 $a Computer science. $3 523869
650  4 $a Systems science. $3 3168411
653    $a Machine learning
653    $a Large language model
653    $a Sparsity
653    $a ML models
690    $a 0984
690    $a 0800
690    $a 0790
710 2  $a Rice University. $b Computer Science. $3 3345731
773 0  $t Dissertations Abstracts International $g 86-02B.
790    $a 0187
791    $a Ph.D.
792    $a 2024
793    $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31532859
Holdings
Barcode: W9510602
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online viewing)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0