Orchestrating the compiler and microarchitecture for reducing cache energy.

Record Type: Electronic resources : Monograph/item
Title/Author: Orchestrating the compiler and microarchitecture for reducing cache energy / Hu, Jie.
Author: Hu, Jie.
Description: 170 p.
Notes: Source: Dissertation Abstracts International, Volume: 65-09, Section: B, page: 4665.
Contained By: Dissertation Abstracts International, 65-09B.
Subject: Computer Science.
Online resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3148653
ISBN: 0496070991
MARC record:
LDR 03126nmm 2200277 4500
001 1846870
005 20051103093556.5
008 130614s2004 eng d
020    $a 0496070991
035    $a (UnM)AAI3148653
035    $a AAI3148653
040    $a UnM $c UnM
100 1  $a Hu, Jie. $3 1934962
245 10 $a Orchestrating the compiler and microarchitecture for reducing cache energy.
300    $a 170 p.
500    $a Source: Dissertation Abstracts International, Volume: 65-09, Section: B, page: 4665.
500    $a Adviser: Vijaykrishnan Narayanan.
502    $a Thesis (Ph.D.)--The Pennsylvania State University, 2004.
520    $a Cache memories are widely employed in modern microprocessor designs to bridge the increasing speed gap between the processor and the off-chip main memory, which constitutes the major performance bottleneck in computer systems. Consequently, caches consume a significant share of the transistor budget and chip die area in microprocessors used in both low-end embedded systems and high-end server systems. As a major consumer of on-chip transistors, and thus of the power budget, the cache memory deserves a new and complete study of its performance and energy behavior, together with new techniques for designing cache memories for next-generation microprocessors.
520    $a This thesis focuses on developing compiler and microarchitecture techniques for designing energy-efficient caches, targeting both dynamic and leakage energy. The thesis makes four major contributions towards energy-efficient cache architectures. First, a detailed cache behavior characterization was performed for both array-based embedded applications and general-purpose applications. The insights obtained from this study suggest that (1) different applications, or different code segments within a single application, have very different cache demands with respect to performance and energy, (2) program execution footprints (instruction addresses) can be highly predictable and usually have a narrow scope during a particular execution phase, especially for embedded applications, and (3) accesses to the instruction cache exhibit high sequentiality. Second, a technique called compiler-directed cache polymorphism (CDCP) was proposed. CDCP analyzes the data reuse exhibited by loop nests to extract their cache demands and determine the best data cache configuration for each code segment, so as to achieve the best performance and energy behavior. Third, this thesis presents a redesigned processor datapath that captures and exploits the predictable execution footprint to reduce energy consumption in instruction caches. Finally, this thesis addresses the growing leakage concern in the instruction cache by exploiting cache hotspots during phase execution and the sequentiality exhibited in the execution footprint.
590    $a School code: 0176.
650  4 $a Computer Science. $3 626642
690    $a 0984
710 20 $a The Pennsylvania State University. $3 699896
773 0  $t Dissertation Abstracts International $g 65-09B.
790 10 $a Narayanan, Vijaykrishnan, $e advisor
790    $a 0176
791    $a Ph.D.
792    $a 2004
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3148653
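The 520 abstract above describes compiler-directed cache polymorphism (CDCP), in which the compiler analyzes the data reuse of each loop nest and selects a data cache configuration matched to its demands. The dissertation text itself is not part of this record, so the C sketch below only illustrates that general idea under assumed inputs: the configuration table, the working-set estimates, and the selection rule are hypothetical and are not taken from the thesis.

#include <stdio.h>
#include <stddef.h>

/* Hypothetical cache configurations a polymorphic data cache might switch
 * among; sizes and associativities are illustrative only. */
typedef struct {
    const char *name;
    size_t size_bytes;   /* total capacity */
    int ways;            /* associativity  */
} cache_config;

static const cache_config configs[] = {
    { "small, direct-mapped",  4 * 1024, 1 },
    { "medium, 2-way",        16 * 1024, 2 },
    { "large, 4-way",         32 * 1024, 4 },
};

/* Per-loop-nest summary a compiler reuse-analysis pass could produce:
 * the estimated working set of the arrays touched inside the nest. */
typedef struct {
    const char *loop_id;
    size_t working_set_bytes;
} loop_summary;

/* Pick the smallest configuration whose capacity covers the working set,
 * falling back to the largest configuration otherwise. */
static const cache_config *select_config(const loop_summary *loop) {
    for (size_t i = 0; i < sizeof(configs) / sizeof(configs[0]); i++) {
        if (loop->working_set_bytes <= configs[i].size_bytes)
            return &configs[i];
    }
    return &configs[sizeof(configs) / sizeof(configs[0]) - 1];
}

int main(void) {
    /* Example loop nests with working sets assumed known at compile time. */
    const loop_summary loops[] = {
        { "loop_nest_A",  2 * 1024 },   /* small stencil kernel   */
        { "loop_nest_B", 12 * 1024 },   /* blocked matrix product */
        { "loop_nest_C", 64 * 1024 },   /* streaming copy         */
    };

    for (size_t i = 0; i < sizeof(loops) / sizeof(loops[0]); i++) {
        const cache_config *cfg = select_config(&loops[i]);
        printf("%s: working set %zu bytes -> configure D-cache as %s\n",
               loops[i].loop_id, loops[i].working_set_bytes, cfg->name);
    }
    return 0;
}

In an actual CDCP flow the chosen configuration would be encoded as a reconfiguration directive emitted before each loop nest; here the selection is simply printed.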
Items
Inventory Number: W9196384
Location Name: Electronic Resources (電子資源)
Item Class: 11.線上閱覽_V (online access)
Material type: E-book
Call number: EB
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0
Opac note:
Attachments: