Making Reinforcement Learning Practical for Building Energy Management.
Record type: Bibliographic - electronic resource : Monograph/item
Title/Author: Making Reinforcement Learning Practical for Building Energy Management.
Author: Dey, Sourav.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2023
Pagination: 246 p.
Notes: Source: Dissertations Abstracts International, Volume: 85-06, Section: B.
Contained by: Dissertations Abstracts International, 85-06B.
Subject: Architectural engineering.
Electronic resource: https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30811904
ISBN: 9798381164954
Dey, Sourav. Making Reinforcement Learning Practical for Building Energy Management. - Ann Arbor : ProQuest Dissertations & Theses, 2023 - 246 p.
Source: Dissertations Abstracts International, Volume: 85-06, Section: B.
Thesis (Ph.D.)--University of Colorado at Boulder, 2023.
Conventional building controls rely on pre-programmed logic and fixed control strategies. They are therefore typically inflexible and often not adaptive to real-time external and internal conditions. Moreover, with the incorporation of emerging technologies in buildings, including solar photovoltaics, electric vehicles, smart devices, and Internet of Things (IoT) devices and sensors, operational control objectives are becoming increasingly complex, calling for advanced control approaches. Reinforcement learning (RL) is an emerging model-free control method that can handle complex, competing objectives and learn to continuously improve performance by repeatedly interacting with the controlled system in its environment. Although RL is an effective control strategy that can self-learn and adapt to changes in an environment without extensive modeling effort, it can take an unacceptably long time to learn an effective control strategy. Additionally, an RL controller is unstable in its initial interactions, when it has insufficient knowledge of system behavior. This trial-and-error process and the long training time make an RL controller unsuitable for direct application to a building. The challenges of long training times and early unstable behavior were addressed through the evaluation of several methodologies, including the use of surrogate models. Strategies such as imitation learning and offline learning from historical rule-based data were compared against metamodel training, inverse reinforcement learning, and online learning complemented by guided exploration. A synthesis of online learning and imitation learning proved effective at reducing training duration and curtailing early exploration; this was achieved without detailed and costly building models and could be executed through purely data-driven methods. Nonetheless, the computational cost of this method was substantial. Moreover, the challenge of transfer learning in buildings was tackled to foster the scalability of data-driven control approaches. A novel rule-extraction method was explored, coupled with the ability of a human stakeholder to interpret, engage with, and adapt the learned policies from one trained agent to another targeting a specific building of interest.
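The combination highlighted in the abstract, warm-starting a controller from the incumbent rule-based policy's logged behaviour before learning online, can be illustrated with a small, self-contained sketch. The Python example below is purely illustrative and is not the dissertation's implementation; the thermostat environment, reward, rule-based policy, and all hyperparameters are assumptions made for this sketch.

```python
# Illustrative sketch only (not the dissertation's implementation): warm-start a
# tabular Q-learning agent from a logged rule-based thermostat policy, then
# fine-tune it online with limited exploration. Environment, reward, rule-based
# policy, and hyperparameters are all assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_TEMPS = 21                 # discretized indoor temperatures, 15..35 degC
N_ACTIONS = 2                # 0 = heating off, 1 = heating on
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.05

def step(temp_idx, action):
    """Toy thermal model: heating raises temperature, idling lets it drift down."""
    drift = 1 if action == 1 else -1
    nxt = int(np.clip(temp_idx + drift + rng.integers(-1, 2), 0, N_TEMPS - 1))
    comfort_penalty = abs((15 + nxt) - 22)   # comfort setpoint of 22 degC
    energy_penalty = 0.5 * action            # heating consumes energy
    return nxt, -(comfort_penalty + energy_penalty)

def rule_based(temp_idx):
    """Stand-in for the incumbent rule-based controller: heat whenever below 22 degC."""
    return 1 if (15 + temp_idx) < 22 else 0

# Pessimistic initialization so that, after the warm start, actions the rule-based
# policy never took are not preferred merely because they were never penalized.
Q = np.full((N_TEMPS, N_ACTIONS), -100.0)

# Phase 1: learn offline from trajectories generated by the rule-based policy,
# analogous to warm-starting from historical building-operation data.
state = int(rng.integers(N_TEMPS))
for _ in range(5000):
    action = rule_based(state)
    nxt, reward = step(state, action)
    Q[state, action] += ALPHA * (reward + GAMMA * Q[nxt].max() - Q[state, action])
    state = nxt

# Phase 2: online fine-tuning with a small epsilon-greedy exploration budget,
# so early interactions stay close to the known-safe rule-based behaviour.
state = int(rng.integers(N_TEMPS))
for _ in range(5000):
    if rng.random() < EPS:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(Q[state].argmax())
    nxt, reward = step(state, action)
    Q[state, action] += ALPHA * (reward + GAMMA * Q[nxt].max() - Q[state, action])
    state = nxt

print("Greedy heating decision per temperature bin:", Q.argmax(axis=1))
```

In this toy setting, the warm-start phase plays the role of imitation/offline learning from historical rule-based data, while the small epsilon in the online phase corresponds to the curtailed early exploration described in the abstract.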
ISBN: 9798381164954
Subjects--Topical Terms: Architectural engineering.
Subjects--Index Terms: Conventional building controls
LDR 03591nmm a2200397 4500
001 2401422
005 20241022112612.5
006 m o d
007 cr#unu||||||||
008 251215s2023 ||||||||||||||||| ||eng d
020 $a 9798381164954
035 $a (MiAaPQ)AAI30811904
035 $a AAI30811904
035 $a 2401422
040 $a MiAaPQ $c MiAaPQ
100 1  $a Dey, Sourav. $3 3696557
245 10 $a Making Reinforcement Learning Practical for Building Energy Management.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2023
300 $a 246 p.
500 $a Source: Dissertations Abstracts International, Volume: 85-06, Section: B.
500 $a Advisor: Henze, Gregor P.
502 $a Thesis (Ph.D.)--University of Colorado at Boulder, 2023.
520 $a Conventional building controls rely on pre-programmed logic and fixed control strategies. They are therefore typically inflexible and often not adaptive to real-time external and internal conditions. Moreover, with the incorporation of emerging technologies in buildings, including solar photovoltaics, electric vehicles, smart devices, and Internet of Things (IoT) devices and sensors, operational control objectives are becoming increasingly complex, calling for advanced control approaches. Reinforcement learning (RL) is an emerging model-free control method that can handle complex, competing objectives and learn to continuously improve performance by repeatedly interacting with the controlled system in its environment. Although RL is an effective control strategy that can self-learn and adapt to changes in an environment without extensive modeling effort, it can take an unacceptably long time to learn an effective control strategy. Additionally, an RL controller is unstable in its initial interactions, when it has insufficient knowledge of system behavior. This trial-and-error process and the long training time make an RL controller unsuitable for direct application to a building. The challenges of long training times and early unstable behavior were addressed through the evaluation of several methodologies, including the use of surrogate models. Strategies such as imitation learning and offline learning from historical rule-based data were compared against metamodel training, inverse reinforcement learning, and online learning complemented by guided exploration. A synthesis of online learning and imitation learning proved effective at reducing training duration and curtailing early exploration; this was achieved without detailed and costly building models and could be executed through purely data-driven methods. Nonetheless, the computational cost of this method was substantial. Moreover, the challenge of transfer learning in buildings was tackled to foster the scalability of data-driven control approaches. A novel rule-extraction method was explored, coupled with the ability of a human stakeholder to interpret, engage with, and adapt the learned policies from one trained agent to another targeting a specific building of interest.
590 $a School code: 0051.
650  4 $a Architectural engineering. $3 3174102
650  4 $a Computer science. $3 523869
653 $a Conventional building controls
653 $a Machine learning
653 $a Reinforcement learning
653 $a Effective control strategy
653 $a Online learning
690 $a 0800
690 $a 0462
690 $a 0984
710 2  $a University of Colorado at Boulder. $b Civil, Environmental, and Architectural Engineering. $3 3350108
773 0  $t Dissertations Abstracts International $g 85-06B.
790 $a 0051
791 $a Ph.D.
792 $a 2023
793 $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30811904
Holdings (1 item)
Barcode: W9509742
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf