Model based approximation methods for reinforcement learning.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Model based approximation methods for reinforcement learning.
Author:
Wang, Xin.
Description:
234 p.
Notes:
Source: Dissertation Abstracts International, Volume: 68-02, Section: B, page: 1094.
Contained By:
Dissertation Abstracts International 68-02B.
Subject:
Artificial Intelligence.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3251980
LDR 03014nmm 2200313 4500
001 1834498
005 20071119145705.5
008 130610s2006 eng d
035 $a (UMI)AAI3251980
035 $a AAI3251980
040 $a UMI $c UMI
100 1 $a Wang, Xin. $3 1245072
245 10 $a Model based approximation methods for reinforcement learning.
300 $a 234 p.
500 $a Source: Dissertation Abstracts International, Volume: 68-02, Section: B, page: 1094.
500 $a Adviser: Thomas G. Dietterich.
502 $a Thesis (Ph.D.)--Oregon State University, 2006.
520 $a The thesis focuses on model-based approximation methods for reinforcement learning, with large-scale applications such as combinatorial optimization problems.
520 $a First, the thesis proposes two new model-based methods to stabilize value-function approximation for reinforcement learning. The first is the BFBP algorithm, a batch reinforcement learning process that iterates between the exploration and exploitation stages of learning. For the exploitation stage, the thesis investigates the plausibility and performance of using more efficient offline algorithms, such as linear regression, regression trees, and SVMs, as value-function approximators. The thesis finds that with systematic local search methods such as Limited Discrepancy Search and a good initial heuristic, the algorithm often converges faster, and to a better level of performance, than epsilon-greedy exploration methods (a sketch of this batch pattern follows the record below).
520 $a The second method combines linear programming with the kernel trick to find value-function approximators for reinforcement learning. One formulation is based on SVM regression; the second is based on the Bellman equation; and the third seeks only to ensure that good moves have an advantage over bad moves. All formulations attempt to minimize the number of support vectors while fitting the data. The advantage of the kernel methods is that they can easily adjust the complexity of the function approximator to fit the complexity of the value function (the second sketch following this record illustrates the advantage formulation).
520 $a The thesis also proposes a model-based policy gradient reinforcement learning algorithm. In our approach, we learn the models P(s'|s, a) and R(s'|s, a), and then use dynamic programming to compute the value of the policy directly from the model. Unlike online sampling-based policy gradient algorithms, it does not suffer from high variance, and it also converges faster (the third sketch following this record follows this recipe).
520 $a In summary, the thesis proposed model-based approximation algorithms for both value-function-based and policy gradient reinforcement learning, with promising application results on multiple problem domains and job-shop scheduling benchmarks.
590 $a School code: 0172.
650 4 $a Artificial Intelligence. $3 769149
650 4 $a Computer Science. $3 626642
690 $a 0800
690 $a 0984
710 20 $a Oregon State University. $3 625720
773 0 $t Dissertation Abstracts International $g 68-02B.
790 10 $a Dietterich, Thomas G., $e advisor
790 $a 0172
791 $a Ph.D.
792 $a 2006
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3251980
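The first 520 note above describes a batch loop that alternates exploration and exploitation, refitting an efficient offline regressor to collected returns on each pass. Below is a minimal Python sketch of that general pattern, not the BFBP algorithm itself: the toy ChainMDP, collect_batch, and simulate names are invented for illustration, Limited Discrepancy Search is not reproduced, and plain epsilon-greedy exploration (the baseline the abstract compares against) is used.

```python
# Hypothetical sketch of a batch explore/exploit loop with an offline
# regressor as the value-function approximator.  All names are invented;
# this is not the thesis's BFBP algorithm.
import random
from sklearn.tree import DecisionTreeRegressor

class ChainMDP:
    """Toy 10-state chain: move left/right, reward 1 at the right end."""
    n_states, n_actions = 10, 2
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = max(0, min(self.n_states - 1, self.s + (1 if a == 1 else -1)))
        return self.s, float(self.s == self.n_states - 1)

def simulate(env, s, a, value, gamma):
    """One-step lookahead r + gamma * V(s') without disturbing env state."""
    saved = env.s
    env.s = s
    s2, r = env.step(a)
    env.s = saved
    return r + gamma * value(s2)

def collect_batch(env, value, episodes=50, horizon=30, eps=0.2, gamma=0.95):
    """Exploration stage: roll out an eps-greedy policy w.r.t. the current
    value estimate and record (state, Monte Carlo return) pairs."""
    data = []
    for _ in range(episodes):
        s, traj = env.reset(), []
        for _ in range(horizon):
            if random.random() < eps:
                a = random.randrange(env.n_actions)
            else:
                a = max(range(env.n_actions),
                        key=lambda a: simulate(env, s, a, value, gamma))
            s2, r = env.step(a)
            traj.append((s, r))
            s = s2
        g = 0.0
        for s, r in reversed(traj):      # backward return computation
            g = r + gamma * g
            data.append((s, g))
    return data

env = ChainMDP()
value = lambda s: 0.0                    # initial value function
for it in range(5):                      # exploitation stage: refit offline
    X, y = zip(*collect_batch(env, value))
    tree = DecisionTreeRegressor(max_depth=4).fit([[x] for x in X], y)
    value = lambda s, t=tree: float(t.predict([[s]])[0])
print("V(0) estimate:", value(0))
```

Swapping DecisionTreeRegressor for linear regression or an SVM regressor changes only the fit line, which is the plausibility question the abstract raises.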
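The second 520 note names three LP formulations over kernel value-function approximators; the third only requires good moves to have an advantage over bad moves. Here is a hypothetical sketch of that advantage-style formulation with scipy.optimize.linprog: an RBF kernel expansion whose L1-penalized coefficients stand in for support-vector minimization, plus a margin-with-slack constraint per good/bad successor pair. The toy data, kernel width, and constant C are assumptions, not values from the thesis.

```python
# A minimal, assumed instance of the "advantage" LP formulation:
# minimize |alpha|_1 + C * sum(xi)  s.t.  V(good) >= V(bad) + 1 - xi.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
centers = rng.uniform(0, 1, size=(8, 2))   # kernel centers (sampled states)

def k(x):
    """RBF features of state x against the centers: V(x) = k(x) @ alpha."""
    return np.exp(-np.sum((centers - x) ** 2, axis=1) / 0.1)

# Each training pair: (successor of a good move, successor of a bad move).
# Here "good" successors are closer to the goal (1, 1) by construction.
goal = np.array([1.0, 1.0])
pairs = []
for _ in range(20):
    a, b = rng.uniform(0, 1, 2), rng.uniform(0, 1, 2)
    good, bad = (a, b) if np.linalg.norm(a - goal) < np.linalg.norm(b - goal) else (b, a)
    pairs.append((k(good), k(bad)))

m, n, C = len(centers), len(pairs), 10.0
# Variables: alpha_plus (m), alpha_minus (m), slacks xi (n); all >= 0.
c = np.concatenate([np.ones(2 * m), C * np.ones(n)])
A_ub, b_ub = [], []
for i, (kg, kb) in enumerate(pairs):
    row = np.zeros(2 * m + n)
    row[:m], row[m:2 * m] = -(kg - kb), (kg - kb)  # -(k_good - k_bad) @ alpha
    row[2 * m + i] = -1.0                          # ... - xi_i <= -1 (margin 1)
    A_ub.append(row)
    b_ub.append(-1.0)
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * (2 * m + n))
alpha = res.x[:m] - res.x[m:2 * m]
print("nonzero kernel coefficients:", np.sum(np.abs(alpha) > 1e-6), "of", m)
```

The L1 objective drives most coefficients to exactly zero, which is how this sketch mirrors the abstract's goal of fitting the data with few support vectors.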
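The third 520 note describes learning the models P(s'|s, a) and R(s'|s, a) and then computing the policy's value from the model by dynamic programming, avoiding the variance of online sampling. Below is a minimal tabular sketch under those assumptions: the models are estimated from transition counts, the value and discounted state visitation come from exact linear solves, and a softmax policy gradient is formed from those exact quantities. The MDP, softmax parameterization, and step size are illustrative choices, not the thesis's.

```python
# Assumed tabular sketch: learn P and R from samples, then evaluate the
# policy and its gradient exactly from the learned model (no sampling noise).
import numpy as np

rng = np.random.default_rng(1)
S, A, gamma = 5, 2, 0.9

# True (unknown) MDP, used only to generate sample transitions.
P_true = rng.dirichlet(np.ones(S), size=(S, A))   # P_true[s, a] over s'
R_true = rng.uniform(0, 1, size=(S, A))

# Learn tabular models P_hat(s'|s, a) and R_hat(s, a) from sampled data.
counts = np.zeros((S, A, S)); rsum = np.zeros((S, A)); nsa = np.zeros((S, A))
for _ in range(20000):
    s, a = rng.integers(S), rng.integers(A)
    s2 = rng.choice(S, p=P_true[s, a])
    counts[s, a, s2] += 1; rsum[s, a] += R_true[s, a]; nsa[s, a] += 1
P_hat = counts / counts.sum(axis=2, keepdims=True)
R_hat = rsum / nsa

def policy(theta):
    e = np.exp(theta - theta.max(axis=1, keepdims=True))  # softmax per state
    return e / e.sum(axis=1, keepdims=True)

def value_and_grad(theta, mu=None):
    """Exact policy value and gradient via dynamic programming on the
    learned model: linear solves replace Monte Carlo estimation."""
    mu = np.full(S, 1.0 / S) if mu is None else mu
    pi = policy(theta)
    P_pi = np.einsum('sa,sat->st', pi, P_hat)
    R_pi = np.einsum('sa,sa->s', pi, R_hat)
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, R_pi)    # V = R + gamma P V
    Q = R_hat + gamma * P_hat @ V                          # Q[s, a]
    d = np.linalg.solve(np.eye(S) - gamma * P_pi.T, mu)    # discounted visits
    # Softmax policy gradient per logit: d(s) * pi(a|s) * (Q(s,a) - E_pi[Q|s]).
    grad = d[:, None] * pi * (Q - (pi * Q).sum(axis=1, keepdims=True))
    return mu @ V, grad

theta = np.zeros((S, A))
for _ in range(200):                     # model-based gradient ascent
    J, g = value_and_grad(theta)
    theta += 0.5 * g
print("model-based policy value:", J)
```

Because the gradient is computed from the model rather than from rollouts, repeated runs give identical gradients, which is the low-variance property the abstract claims for the model-based approach.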
Items
Inventory Number: W9225518
Location Name: Electronic Resources (電子資源)
Item Class: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Usage Class: General use (Normal)
Loan Status: On shelf
No. of reservations: 0