Reinforcement Learning in Eco-Driving for Connected and Automated Vehicles.
Record type:
Bibliographic, electronic resource : Monograph/item
Title/Author:
Reinforcement Learning in Eco-Driving for Connected and Automated Vehicles. / Zhu, Zhaoxuan.
Author:
Zhu, Zhaoxuan.
Extent:
1 online resource (155 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 84-01, Section: B.
Contained by:
Dissertations Abstracts International, 84-01B.
Subject:
Automotive engineering.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29310787 (click for full text, PQDT)
ISBN:
9798834019756
LDR  04491nmm a2200445K 4500
001  2354210
005  20230324111228.5
006  m o d
007  cr mn ---uuuuu
008  241011s2021 xx obm 000 0 eng d
020    $a 9798834019756
035    $a (MiAaPQ)AAI29310787
035    $a (MiAaPQ)OhioLINKosu1638536531293253
035    $a AAI29310787
040    $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Zhu, Zhaoxuan. $3 3694556
245 10 $a Reinforcement Learning in Eco-Driving for Connected and Automated Vehicles.
264  0 $c 2021
300    $a 1 online resource (155 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Dissertations Abstracts International, Volume: 84-01, Section: B.
500    $a Advisor: Canova, Marcello.
502    $a Thesis (Ph.D.)--The Ohio State University, 2021.
504    $a Includes bibliographical references.
520    $a Connected and Automated Vehicles (CAVs) can significantly improve transportation efficiency by taking advantage of advanced connectivity technologies. Meanwhile, the combination of CAVs and powertrain electrification, such as Hybrid Electric Vehicles (HEVs) and Plug-in Hybrid Electric Vehicles (PHEVs), offers greater potential to improve fuel economy due to the extra control flexibility compared to vehicles with a single power source. In this context, the eco-driving control optimization problem seeks to design the optimal speed and powertrain component usage profiles, based on information received from advanced mapping or Vehicle-to-Everything (V2X) communications, that minimize the energy consumed by the vehicle over a given itinerary. To overcome the real-time computational complexity and embrace the stochastic nature of the driving task, this dissertation studies the application and extension of state-of-the-art (SOTA) Deep Reinforcement Learning (DRL) algorithms to the eco-driving problem for a mild HEV. For better training and a more comprehensive evaluation, an RL environment is developed, consisting of a mild-HEV powertrain and vehicle dynamics model and a large-scale microscopic traffic simulator. To benchmark the performance of the developed strategies, two causal controllers, namely a baseline strategy representing human drivers and a deterministic optimal-control-based strategy, and the non-causal wait-and-see solution are implemented. In the first RL application, the eco-driving problem is formulated as a Partially Observable Markov Decision Process, and a SOTA model-free DRL (MFDRL) algorithm, Proximal Policy Optimization with a Long Short-Term Memory function approximator, is used. Evaluated over 100 trips randomly generated in the city of Columbus, OH, the MFDRL agent shows a 17% fuel economy improvement over the baseline strategy while keeping the average travel time comparable. While showing performance comparable to the optimal-control-based strategy, the actor of the MFDRL agent offers an explicit control policy that significantly reduces the onboard computation. Subsequently, a model-based DRL (MBDRL) algorithm, Safe Model-based Off-policy Reinforcement Learning (SMORL), is proposed. The algorithm addresses the following issues that emerged from the MFDRL development: a) the cumbersome process required to design the reward mechanism, b) the lack of constraint satisfaction and feasibility guarantees, and c) low sample efficiency. Specifically, SMORL consists of three key components: a massively parallelizable dynamic programming trajectory optimizer, a value function learned in an off-policy fashion, and a learned safe set as a generative model. Evaluated under the same conditions, the SMORL agent shows a 21% reduction in fuel consumption over the baseline and dominant performance over both the MFDRL agent and the deterministic optimal-control-based controller.
533    $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2023
538    $a Mode of access: World Wide Web
650  4 $a Automotive engineering. $3 2181195
650  4 $a Mechanical engineering. $3 649730
650  4 $a Energy. $3 876794
650  4 $a Computer science. $3 523869
650  4 $a Robotics. $3 519753
653    $a Connected and Automated Vehicles
653    $a Reinforcement Learning
653    $a Generative models
653    $a Energy efficiency
653    $a Dynamic programming
653    $a Parallel computing
655  7 $a Electronic books. $2 lcsh $3 542853
690    $a 0540
690    $a 0548
690    $a 0984
690    $a 0771
690    $a 0791
710 2  $a ProQuest Information and Learning Co. $3 783688
710 2  $a The Ohio State University. $b Mechanical Engineering. $3 1684523
773 0  $t Dissertations Abstracts International $g 84-01B.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29310787 $z click for full text (PQDT)
Holdings:
Barcode: W9476566
Location: Electronic Resources
Circulation category: 11. Online viewing_V
Material type: E-book
Call number: EB
Use type: General use (Normal)
Loan status: On shelf
Holds: 0