Energy-Efficient Implementation of Machine Learning Algorithms.
Record type: Bibliographic (electronic resource) : Monograph/item
Title/Author: Energy-Efficient Implementation of Machine Learning Algorithms.
Author: Lu, Jie.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2021
Extent: 148 p.
Note: Source: Dissertations Abstracts International, Volume: 83-04, Section: B.
Contained by: Dissertations Abstracts International, 83-04B.
Subject: Electrical engineering.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28643572
ISBN: 9798471106895
Thesis (Ph.D.)--Princeton University, 2021.
This item must not be sold to any third party vendors.
Pattern-recognition algorithms from the domain of machine learning play a prominent role in embedded sensing systems, where they derive inferences from sensor data. Very often, such systems face severe energy constraints. The focus of this thesis is on mitigating the energy required for computation, communication, and storage by exploiting various forms of computation algorithms.

In the first part of our work, we focus on reducing the computations necessary during linear signal processing. To achieve this reduction, we consider random projection: a form of compression that preserves a similarity metric widely used for pattern recognition, namely the inner product between source vectors. Given the prominence of random projections within compressive sensing, previous research has explored this idea for application to compressively-sensed signals. We show that random projections can be exploited more generally, without compressive sensing, enabling significant reduction in computational energy while avoiding a significant source of error. The approach, which applies to Nyquist-sampled signals, is referred to as compressed signal processing (CSP).

The second part of our work focuses on signal processing that may not be linear. We look into approximate computing and its potential as an algorithmic approach to reducing energy. Approximate computing is a broad approach that has recently received considerable attention in the context of inference systems, stemming from the observation that many inference systems exhibit various forms of tolerance to data noise.
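The inner-product-preserving property of random projection that the first part of the work relies on can be illustrated with a minimal NumPy sketch. This is not code from the thesis; the dimensions, seed, and scaling are illustrative assumptions (the 1/sqrt(M) scaling makes the projected inner product an unbiased estimate of the original one):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two correlated, Nyquist-sampled source vectors in the original domain.
N, M = 1024, 128                      # M << N: projected dimension
x = rng.standard_normal(N)
y = x + 0.1 * rng.standard_normal(N)  # y is similar to x

# Random projection with 1/sqrt(M) scaling, so inner products are
# preserved in expectation (a Johnson-Lindenstrauss-style argument).
R = rng.standard_normal((M, N)) / np.sqrt(M)
xp, yp = R @ x, R @ y

print(np.dot(x, y))    # similarity metric in the original domain
print(np.dot(xp, yp))  # approximately the same after projection
```

Because the downstream pattern-recognition computation depends only on inner products, it can run on the M-dimensional projected vectors, an 8x reduction in dimensionality here, at the cost of a bounded estimation error.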
While some systems have demonstrated significant approximation-vs.-energy knobs to exploit this tolerance, those knobs have been applicable only to specific kernels and architectures; the more generally available knobs (e.g., voltage overscaling, bit-precision scaling) have been relatively weak, introducing large data noise for relatively modest energy savings. In this work, we explore the use of genetic programming (GP) to compute approximate features. Further, we leverage a method that enhances tolerance to feature-data noise through directed retraining of the inference stage. Previous work has shown that GP generalizes well to enable approximation of a broad range of computations, raising the potential for broad applicability of the proposed approach. The focus on feature extraction is deliberate, because feature extraction involves diverse, often highly nonlinear operations that challenge the general applicability of energy-reducing approaches.

The third part of our work considers multi-task algorithms. By exploiting the concept of transfer learning together with energy-efficient accelerators, we show that convolutional autoencoders can enable various levels of reduction in computational energy while avoiding a significant reduction in inference performance when multiple task categories are targeted for inference. To minimize inference computational energy, a convolutional autoencoder is used to learn a generalized representation of inputs. We consider three scenarios: transferring layers from convolutional autoencoders, transferring layers from convolutional neural networks trained on different tasks, and no layer transfer. We also compare the performance of transferring only convolutional layers against transferring convolutional layers together with a fully connected layer. We study our methodologies through validations of their generalizability and through applications using clinical and image data.
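As a rough illustration of why a generic knob such as bit-precision scaling trades accuracy for energy, the sketch below (not from the thesis; the feature data and bit widths are hypothetical) uniformly quantizes a feature vector at decreasing precision and measures the resulting feature-data noise:

```python
import numpy as np

def quantize(x, bits):
    """Uniform quantization of x over its own range to the given bit
    width: a stand-in for the bit-precision-scaling knob above."""
    levels = 2 ** bits
    lo, hi = x.min(), x.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((x - lo) / step) * step

rng = np.random.default_rng(1)
features = rng.standard_normal(1000)  # hypothetical feature data

for bits in (8, 4, 2):
    err = np.abs(quantize(features, bits) - features).mean()
    print(f"{bits}-bit features: mean abs error = {err:.4f}")
```

Each halving of the bit width roughly doubles the quantization step and hence the injected noise, which is why modest energy savings from such knobs can come with disproportionately large accuracy costs unless, as in this work, the inference stage is retrained to tolerate the noise.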
ISBN: 9798471106895
Subjects--Topical Terms: Electrical engineering.
Subjects--Index Terms: Approximate learning
LDR    04978nmm a2200421 4500
001    2349580
005    20230509091118.5
006    m o d
007    cr#unu||||||||
008    241004s2021 ||||||||||||||||| ||eng d
020    $a 9798471106895
035    $a (MiAaPQ)AAI28643572
035    $a AAI28643572
040    $a MiAaPQ $c MiAaPQ
100 1  $a Lu, Jie. $3 1250979
245 10 $a Energy-Efficient Implementation of Machine Learning Algorithms.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300    $a 148 p.
500    $a Source: Dissertations Abstracts International, Volume: 83-04, Section: B.
500    $a Advisor: Jha, Niraj K.; Verma, Naveen.
502    $a Thesis (Ph.D.)--Princeton University, 2021.
506    $a This item must not be sold to any third party vendors.
590    $a School code: 0181.
650  4 $a Electrical engineering. $3 649834
650  4 $a Computer engineering. $3 621879
650  4 $a Applied mathematics. $3 2122814
650  4 $a Artificial intelligence. $3 516317
653    $a Approximate learning
653    $a Convolutional neural networks
653    $a Energy reduction
653    $a Machine learning
653    $a Multi-task images
653    $a Transfer learning
690    $a 0544
690    $a 0464
690    $a 0800
690    $a 0364
710 2  $a Princeton University. $b Electrical Engineering. $3 2095953
773 0  $t Dissertations Abstracts International $g 83-04B.
790    $a 0181
791    $a Ph.D.
792    $a 2021
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28643572
Holdings (1 item):
Barcode: W9472018
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Reservations: 0