A Deep Reinforcement Learning Approach for Robotic Bicycle Stabilization.
Record Type:
Electronic resources : Monograph/item
Title/Author:
A Deep Reinforcement Learning Approach for Robotic Bicycle Stabilization. / Turakhia, Shubham.
Author:
Turakhia, Shubham.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2020.
Description:
68 p.
Notes:
Source: Masters Abstracts International, Volume: 82-07.
Contained By:
Masters Abstracts International, 82-07.
Subject:
Mechanical engineering. - Computer engineering. - Automotive engineering. - Robotics.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28157038
ISBN:
9798557028929
A Deep Reinforcement Learning Approach for Robotic Bicycle Stabilization.
Turakhia, Shubham.
A Deep Reinforcement Learning Approach for Robotic Bicycle Stabilization.
- Ann Arbor : ProQuest Dissertations & Theses, 2020 - 68 p.
Source: Masters Abstracts International, Volume: 82-07.
Thesis (M.S.)--Arizona State University, 2020.
This item must not be sold to any third party vendors.
Bicycle stabilization has become a popular research topic because of the bicycle's complex dynamic behavior and the large body of bicycle modeling research. Riding a bicycle requires accurately performing several tasks, such as balancing and navigation, which may be difficult for disabled people; their difficulties could be partially reduced by providing steering assistance. Many control techniques have been applied to stabilize these highly maneuverable and efficient machines, achieving interesting results but with limitations that include strict environmental requirements. This thesis expands on the work of Randlov and Alstrom, using reinforcement learning for bicycle self-stabilization with robotic steering. It applies the deep deterministic policy gradient algorithm, which can handle continuous action spaces, something the Q-learning technique cannot. The research involved training the algorithm in virtual environments, followed by simulations to assess its results. Hardware testing was also conducted on Arizona State University's RISE lab Smart bicycle platform to evaluate its self-balancing performance. A detailed analysis of the bicycle trial runs is presented. Testing was validated by plotting the real-time states and actions, including the bicycle's roll angle, collected during outdoor testing. Further improvements regarding model training and hardware testing are also presented.
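The deep deterministic policy gradient (DDPG) method named in the abstract is an actor-critic algorithm: a deterministic actor maps the current state to a real-valued action, and a critic estimates the action value used to improve the actor, which is what lets it work directly in a continuous action space. The sketch below shows one DDPG update step in PyTorch; it is a minimal illustration only, and the state/action layout (roll angle, roll rate, and steering angle mapped to a single steering command), the network sizes, the hyperparameters, and the helper names (mlp, ddpg_update) are assumptions for this example, not the formulation used in the thesis.

# One DDPG update step (illustrative sketch, not the thesis implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM = 3, 1   # assumed: [roll angle, roll rate, steering angle] -> steering command
GAMMA, TAU = 0.99, 0.005       # discount factor and soft target-update rate (assumed values)

def mlp(in_dim, out_dim):
    # Small fully connected network used for both actor and critic.
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, out_dim))

actor = nn.Sequential(mlp(STATE_DIM, ACTION_DIM), nn.Tanh())    # continuous action in [-1, 1]
critic = mlp(STATE_DIM + ACTION_DIM, 1)                         # Q(s, a)
actor_tgt = nn.Sequential(mlp(STATE_DIM, ACTION_DIM), nn.Tanh())
critic_tgt = mlp(STATE_DIM + ACTION_DIM, 1)
actor_tgt.load_state_dict(actor.state_dict())                   # targets start as copies
critic_tgt.load_state_dict(critic.state_dict())

actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(s, a, r, s2, done):
    # Critic: regress Q(s, a) toward the bootstrapped target r + gamma * Q'(s2, mu'(s2)).
    with torch.no_grad():
        q_next = critic_tgt(torch.cat([s2, actor_tgt(s2)], dim=1))
        target = r + GAMMA * (1.0 - done) * q_next
    critic_loss = F.mse_loss(critic(torch.cat([s, a], dim=1)), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: deterministic policy gradient -- ascend Q(s, mu(s)) with respect to actor weights.
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft (Polyak) update keeps the target networks slowly tracking the trained ones.
    with torch.no_grad():
        for net, tgt in ((actor, actor_tgt), (critic, critic_tgt)):
            for p, p_tgt in zip(net.parameters(), tgt.parameters()):
                p_tgt.mul_(1.0 - TAU).add_(TAU * p)

# Smoke test on random transitions (a real run samples these from a replay buffer).
B = 32
ddpg_update(torch.randn(B, STATE_DIM), torch.rand(B, ACTION_DIM) * 2 - 1,
            torch.randn(B, 1), torch.randn(B, STATE_DIM), torch.zeros(B, 1))

In a complete training loop the batch would be sampled from a replay buffer, with exploration noise added to the actor's output during data collection; the random tensors at the end are only a smoke test of the update step.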
ISBN: 9798557028929
Subjects--Topical Terms:
649730
Mechanical engineering.
Subjects--Index Terms:
Bicycle
A Deep Reinforcement Learning Approach for Robotic Bicycle Stabilization.
LDR
:02591nmm a2200385 4500
001
2347574
005
20220823142302.5
008
241004s2020 ||||||||||||||||| ||eng d
020
$a
9798557028929
035
$a
(MiAaPQ)AAI28157038
035
$a
AAI28157038
040
$a
MiAaPQ
$c
MiAaPQ
100
1
$a
Turakhia, Shubham.
$3
3686842
245
1 0
$a
A Deep Reinforcement Learning Approach for Robotic Bicycle Stabilization.
260
1
$a
Ann Arbor :
$b
ProQuest Dissertations & Theses,
$c
2020
300
$a
68 p.
500
$a
Source: Masters Abstracts International, Volume: 82-07.
500
$a
Advisor: Zhang, Wenlong.
502
$a
Thesis (M.S.)--Arizona State University, 2020.
506
$a
This item must not be sold to any third party vendors.
520
$a
Bicycle stabilization has become a popular research topic because of the bicycle's complex dynamic behavior and the large body of bicycle modeling research. Riding a bicycle requires accurately performing several tasks, such as balancing and navigation, which may be difficult for disabled people; their difficulties could be partially reduced by providing steering assistance. Many control techniques have been applied to stabilize these highly maneuverable and efficient machines, achieving interesting results but with limitations that include strict environmental requirements. This thesis expands on the work of Randlov and Alstrom, using reinforcement learning for bicycle self-stabilization with robotic steering. It applies the deep deterministic policy gradient algorithm, which can handle continuous action spaces, something the Q-learning technique cannot. The research involved training the algorithm in virtual environments, followed by simulations to assess its results. Hardware testing was also conducted on Arizona State University's RISE lab Smart bicycle platform to evaluate its self-balancing performance. A detailed analysis of the bicycle trial runs is presented. Testing was validated by plotting the real-time states and actions, including the bicycle's roll angle, collected during outdoor testing. Further improvements regarding model training and hardware testing are also presented.
590
$a
School code: 0010.
650
4
$a
Mechanical engineering.
$3
649730
650
4
$a
Computer engineering.
$3
621879
650
4
$a
Automotive engineering.
$3
2181195
650
4
$a
Robotics.
$3
519753
653
$a
Bicycle
653
$a
Hardware testing
653
$a
Reinforcement Learning
653
$a
Self balancing
653
$a
Stabilization
690
$a
0548
690
$a
0464
690
$a
0540
690
$a
0771
710
2
$a
Arizona State University.
$b
Mechanical Engineering.
$3
1675155
773
0
$t
Masters Abstracts International
$g
82-07.
790
$a
0010
791
$a
M.S.
792
$a
2020
793
$a
English
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28157038
Items
1 record
Inventory Number: W9470012
Location Name: 電子資源 (Electronic resources)
Item Class: 11.線上閱覽_V (Online reading)
Material type: 電子書 (E-book)
Call number: EB
Usage Class: 一般使用 (Normal)
Loan Status: On shelf
No. of reservations: 0
Opac note:
Attachments: