Reinforcement Learning in Eco-Driving for Connected and Automated Vehicles.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Reinforcement Learning in Eco-Driving for Connected and Automated Vehicles. / Zhu, Zhaoxuan.
Author:
Zhu, Zhaoxuan.
Description:
1 online resource (155 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 84-01, Section: B.
Contained By:
Dissertations Abstracts International, 84-01B.
Subject:
Automotive engineering. - Mechanical engineering. - Energy. - Computer science. - Robotics.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29310787 - click for full text (PQDT)
ISBN:
9798834019756
Zhu, Zhaoxuan.
Reinforcement Learning in Eco-Driving for Connected and Automated Vehicles.
- 1 online resource (155 pages)
Source: Dissertations Abstracts International, Volume: 84-01, Section: B.
Thesis (Ph.D.)--The Ohio State University, 2021.
Includes bibliographical references
Connected and Automated Vehicles (CAVs) can significantly improve transportation efficiency by taking advantage of advanced connectivity technologies. Meanwhile, the combination of CAVs and powertrain electrification, as in Hybrid Electric Vehicles (HEVs) and Plug-in Hybrid Electric Vehicles (PHEVs), offers greater potential to improve fuel economy thanks to the extra control flexibility compared to vehicles with a single power source. In this context, the eco-driving control optimization problem seeks to design optimal speed and powertrain component usage profiles, based on information received from advanced mapping or Vehicle-to-Everything (V2X) communications, that minimize the energy consumed by the vehicle over a given itinerary. To overcome the real-time computational complexity and embrace the stochastic nature of the driving task, this dissertation studies the application and extension of state-of-the-art (SOTA) Deep Reinforcement Learning (DRL) algorithms to the eco-driving problem for a mild HEV.

For better training and a more comprehensive evaluation, an RL environment is developed, consisting of a mild-HEV powertrain and vehicle dynamics model and a large-scale microscopic traffic simulator. To benchmark the performance of the developed strategies, two causal controllers, namely a baseline strategy representing human drivers and a deterministic optimal-control-based strategy, are implemented alongside the non-causal wait-and-see solution.

In the first RL application, the eco-driving problem is formulated as a Partially Observable Markov Decision Process (POMDP), and a SOTA model-free DRL (MFDRL) algorithm, Proximal Policy Optimization with Long Short-Term Memory as the function approximator, is used. Evaluated over 100 trips randomly generated in the city of Columbus, OH, the MFDRL agent shows a 17% fuel economy improvement over the baseline strategy while keeping the average travel time comparable. While performing comparably to the optimal-control-based strategy, the actor of the MFDRL agent provides an explicit control policy that significantly reduces the onboard computation.

Subsequently, a model-based DRL (MBDRL) algorithm, Safe Model-based Off-policy Reinforcement Learning (SMORL), is proposed. The algorithm addresses three issues that emerged from the MFDRL development: a) the cumbersome process required to design the reward mechanism, b) the lack of constraint-satisfaction and feasibility guarantees, and c) low sample efficiency. Specifically, SMORL consists of three key components: a massively parallelizable dynamic programming trajectory optimizer, a value function learned in an off-policy fashion, and a learned safe set represented as a generative model. Evaluated under the same conditions, the SMORL agent shows a 21% reduction in fuel consumption over the baseline and dominant performance over both the MFDRL agent and the deterministic optimal-control-based controller.
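The abstract pairs Proximal Policy Optimization with an LSTM function approximator to handle the partially observable eco-driving task. As a rough illustration of why such an actor yields an explicit control policy that is cheap to evaluate onboard, here is a minimal recurrent-actor sketch in Python/PyTorch; the observation size, action size, hidden width, and the class name LSTMActor are illustrative assumptions, not the dissertation's actual architecture.

# Minimal sketch of an LSTM-based actor in the spirit of the abstract's
# PPO + LSTM agent. All dimensions below are hypothetical placeholders.
import torch
import torch.nn as nn

class LSTMActor(nn.Module):
    """Recurrent policy: maps an observation sequence to Gaussian action parameters."""
    def __init__(self, obs_dim=8, act_dim=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, act_dim)            # action-mean head
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs_seq, state=None):
        # obs_seq: (batch, time, obs_dim); the LSTM state carries the
        # memory that a POMDP formulation requires.
        out, state = self.lstm(obs_seq, state)
        return self.mu(out), self.log_std.exp(), state

# One forward pass per control step is all the onboard controller needs,
# which is the "explicit control policy" advantage the abstract notes.
actor = LSTMActor()
obs = torch.zeros(1, 1, 8)                              # a single timestep
mu, std, state = actor(obs)
action = torch.normal(mu, std.expand_as(mu))            # sample from the policy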
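SMORL, as summarized above, builds on a massively parallelizable dynamic programming trajectory optimizer. A minimal sketch of that idea, assuming a toy fuel model, a coarse speed grid, and a fixed acceleration limit (the numbers and the names step_cost / solve_speed_profile are hypothetical, not the dissertation's mild-HEV model), could look like this in Python/NumPy:

# Toy backward-DP speed-profile optimizer over a discretized route.
import numpy as np

DS = 50.0                                  # spatial step along the route [m]
N_STEPS = 40                               # route length = N_STEPS * DS = 2 km
V_GRID = np.linspace(5.0, 20.0, 31)        # admissible speeds [m/s]

def step_cost(v_now, v_next):
    """Hypothetical fuel-plus-time cost for covering one DS segment."""
    v_avg = 0.5 * (v_now + v_next)
    accel = (v_next**2 - v_now**2) / (2.0 * DS)     # kinematic acceleration
    dt = DS / v_avg                                  # segment travel time [s]
    power = 0.05 * v_avg**3 + 800.0 * np.maximum(accel, 0.0) * v_avg
    return power * dt + 0.5 * dt                     # energy + travel-time weight

def solve_speed_profile():
    nv = len(V_GRID)
    cost_to_go = np.zeros(nv)                        # terminal cost is zero
    policy = np.zeros((N_STEPS, nv), dtype=int)
    # Evaluate the stage cost for every (v_now, v_next) pair at once:
    # broadcasting replaces the inner loops, which is what makes this
    # style of DP embarrassingly parallel on vector hardware.
    stage = step_cost(V_GRID[:, None], V_GRID[None, :])          # (nv, nv)
    feasible = np.abs((V_GRID[None, :]**2 - V_GRID[:, None]**2)
                      / (2.0 * DS)) <= 2.0                        # |accel| limit
    stage = np.where(feasible, stage, np.inf)
    for k in range(N_STEPS - 1, -1, -1):             # backward recursion
        total = stage + cost_to_go[None, :]
        policy[k] = np.argmin(total, axis=1)
        cost_to_go = np.min(total, axis=1)
    # Roll the optimal policy forward from an initial speed of 10 m/s.
    i = int(np.abs(V_GRID - 10.0).argmin())
    profile = [V_GRID[i]]
    for k in range(N_STEPS):
        i = policy[k][i]                             # best next speed index
        profile.append(V_GRID[i])
    return np.array(profile), cost_to_go

if __name__ == "__main__":
    profile, _ = solve_speed_profile()
    print("optimized speed profile [m/s]:", np.round(profile, 1))

Because the stage-cost matrix is evaluated for every state pair at once, the backward recursion reduces to vectorized min/argmin operations, which is what makes this family of DP optimizers easy to parallelize, as the abstract suggests.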
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2023. Mode of access: World Wide Web.
ISBN: 9798834019756
Subjects--Topical Terms: Automotive engineering.
Subjects--Index Terms: Connected and Automated Vehicles
Index Terms--Genre/Form: Electronic books.
Reinforcement Learning in Eco-Driving for Connected and Automated Vehicles.
LDR  04491nmm a2200445K 4500
001  2354210
005  20230324111228.5
006  m o d
007  cr mn ---uuuuu
008  241011s2021 xx obm 000 0 eng d
020  $a 9798834019756
035  $a (MiAaPQ)AAI29310787
035  $a (MiAaPQ)OhioLINKosu1638536531293253
035  $a AAI29310787
040  $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Zhu, Zhaoxuan. $3 3694556
245 1 0  $a Reinforcement Learning in Eco-Driving for Connected and Automated Vehicles.
264 0  $c 2021
300  $a 1 online resource (155 pages)
336  $a text $b txt $2 rdacontent
337  $a computer $b c $2 rdamedia
338  $a online resource $b cr $2 rdacarrier
500  $a Source: Dissertations Abstracts International, Volume: 84-01, Section: B.
500  $a Advisor: Canova, Marcello.
502  $a Thesis (Ph.D.)--The Ohio State University, 2021.
504  $a Includes bibliographical references
520  $a Connected and Automated Vehicles (CAVs) can significantly improve transportation efficiency by taking advantage of advanced connectivity technologies. Meanwhile, the combination of CAVs and powertrain electrification, as in Hybrid Electric Vehicles (HEVs) and Plug-in Hybrid Electric Vehicles (PHEVs), offers greater potential to improve fuel economy thanks to the extra control flexibility compared to vehicles with a single power source. In this context, the eco-driving control optimization problem seeks to design optimal speed and powertrain component usage profiles, based on information received from advanced mapping or Vehicle-to-Everything (V2X) communications, that minimize the energy consumed by the vehicle over a given itinerary. To overcome the real-time computational complexity and embrace the stochastic nature of the driving task, this dissertation studies the application and extension of state-of-the-art (SOTA) Deep Reinforcement Learning (DRL) algorithms to the eco-driving problem for a mild HEV. For better training and a more comprehensive evaluation, an RL environment is developed, consisting of a mild-HEV powertrain and vehicle dynamics model and a large-scale microscopic traffic simulator. To benchmark the performance of the developed strategies, two causal controllers, namely a baseline strategy representing human drivers and a deterministic optimal-control-based strategy, are implemented alongside the non-causal wait-and-see solution. In the first RL application, the eco-driving problem is formulated as a Partially Observable Markov Decision Process (POMDP), and a SOTA model-free DRL (MFDRL) algorithm, Proximal Policy Optimization with Long Short-Term Memory as the function approximator, is used. Evaluated over 100 trips randomly generated in the city of Columbus, OH, the MFDRL agent shows a 17% fuel economy improvement over the baseline strategy while keeping the average travel time comparable. While performing comparably to the optimal-control-based strategy, the actor of the MFDRL agent provides an explicit control policy that significantly reduces the onboard computation. Subsequently, a model-based DRL (MBDRL) algorithm, Safe Model-based Off-policy Reinforcement Learning (SMORL), is proposed. The algorithm addresses three issues that emerged from the MFDRL development: a) the cumbersome process required to design the reward mechanism, b) the lack of constraint-satisfaction and feasibility guarantees, and c) low sample efficiency. Specifically, SMORL consists of three key components: a massively parallelizable dynamic programming trajectory optimizer, a value function learned in an off-policy fashion, and a learned safe set represented as a generative model. Evaluated under the same conditions, the SMORL agent shows a 21% reduction in fuel consumption over the baseline and dominant performance over both the MFDRL agent and the deterministic optimal-control-based controller.
533  $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2023
538  $a Mode of access: World Wide Web
650  4  $a Automotive engineering. $3 2181195
650  4  $a Mechanical engineering. $3 649730
650  4  $a Energy. $3 876794
650  4  $a Computer science. $3 523869
650  4  $a Robotics. $3 519753
653  $a Connected and Automated Vehicles
653  $a Reinforcement Learning
653  $a Generative models
653  $a Energy efficiency
653  $a Dynamic programming
653  $a Parallel computing
655  7  $a Electronic books. $2 lcsh $3 542853
690  $a 0540
690  $a 0548
690  $a 0984
690  $a 0771
690  $a 0791
710 2  $a ProQuest Information and Learning Co. $3 783688
710 2  $a The Ohio State University. $b Mechanical Engineering. $3 1684523
773 0  $t Dissertations Abstracts International $g 84-01B.
856 4 0  $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29310787 $z click for full text (PQDT)
Items
Inventory Number: W9476566
Location Name: Electronic resources
Item Class: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Usage Class: Normal
Loan Status: On shelf
No. of reservations: 0