Reinforcement learning in high-diameter, continuous environments.
Record Type:
Language materials, printed : Monograph/item
Title/Author:
Reinforcement learning in high-diameter, continuous environments.
Author:
Provost, Jefferson.
Description:
153 p.
Notes:
Advisers: Benjamin Kuipers; Risto Miikkulainen.
Contained By:
Dissertation Abstracts International, 68-09B.
Subject:
Artificial Intelligence.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3277622
ISBN:
9780549171850
LDR 02962nam 2200325 a 45
001 958935
005 20110704
008 110704s2007 ||||||||||||||||| ||eng d
020 $a 9780549171850
035 $a (UMI)AAI3277622
035 $a AAI3277622
040 $a UMI $c UMI
100 1 $a Provost, Jefferson. $3 1282401
245 1 0 $a Reinforcement learning in high-diameter, continuous environments.
300 $a 153 p.
500 $a Advisers: Benjamin Kuipers; Risto Miikkulainen.
500 $a Source: Dissertation Abstracts International, Volume: 68-09, Section: B, page: 6082.
502 $a Thesis (Ph.D.)--The University of Texas at Austin, 2007.
520 $a Many important real-world robotic tasks have high diameter, that is, their solution requires a large number of primitive actions by the robot. For example, they may require navigating to distant locations using primitive motor control commands. In addition, modern robots are endowed with rich, high-dimensional sensory systems, providing measurements of a continuous environment. Reinforcement learning (RL) has shown promise as a method for automatic learning of robot behavior, but current methods work best on low-diameter, low-dimensional tasks. Because of this problem, the success of RL on real-world tasks still depends on human analysis of the robot, environment, and task to provide a useful set of perceptual features and an appropriate decomposition of the task into subtasks.
520 $a This thesis presents Self-Organizing Distinctive-state Abstraction (SODA) as a solution to this problem. Using SODA, a robot with little prior knowledge of its sensorimotor system, environment, and task can automatically reduce the effective diameter of its tasks. First, it uses a self-organizing feature map to learn higher-level perceptual features while exploring using primitive, local actions. Then, using the learned features as input, it learns a set of high-level actions that carry the robot between perceptually distinctive states in the environment.
520 $a Experiments in two robot navigation environments demonstrate that SODA learns useful features and high-level actions, that using these new actions dramatically speeds up learning for high-diameter navigation tasks, and that the method scales to large (building-sized) robot environments. These experiments demonstrate SODA's effectiveness as a generic learning agent for mobile robot navigation, pointing the way toward developmental robots that learn to understand themselves and their environments through experience in the world, reducing the need for human engineering for each new robotic application.
590 $a School code: 0227.
650 4 $a Artificial Intelligence. $3 769149
650 4 $a Computer Science. $3 626642
650 4 $a Engineering, Robotics. $3 1018454
690 $a 0771
690 $a 0800
690 $a 0984
710 2 $a The University of Texas at Austin. $b Computer Sciences. $3 1037904
773 0 $t Dissertation Abstracts International $g 68-09B.
790 $a 0227
790 1 0 $a Kuipers, Benjamin, $e advisor
790 1 0 $a Miikkulainen, Risto, $e advisor
791 $a Ph.D.
792 $a 2007
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3277622
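The second 520 abstract field above outlines SODA's first stage: training a self-organizing feature map (SOM) on raw sensory input gathered while the robot explores with primitive actions, so that map units come to act as learned perceptual features. The sketch below is a minimal, generic SOM in Python with NumPy, for illustration only; the map size, the 180-dimensional "scan" input, and the learning schedule are assumptions of this sketch, not parameters or code from the dissertation.

# Minimal sketch of a self-organizing feature map of the kind the abstract
# describes for learning perceptual features from raw sensor vectors.
# Illustrative toy only; dimensions and schedules are assumed, not taken
# from the dissertation.
import numpy as np

class SOM:
    def __init__(self, rows=10, cols=10, dim=180, seed=0):
        self.rng = np.random.default_rng(seed)
        self.rows, self.cols, self.dim = rows, cols, dim
        # One prototype vector per map unit.
        self.weights = self.rng.random((rows * cols, dim))
        # Each unit's (row, col) position on the map grid, used for neighborhoods.
        self.grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)

    def winner(self, x):
        """Index of the best-matching unit (the 'feature' activated by input x)."""
        return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

    def train(self, data, epochs=20, lr0=0.5, sigma0=3.0):
        """Classic online SOM training with decaying learning rate and neighborhood."""
        for epoch in range(epochs):
            lr = lr0 * (1.0 - epoch / epochs)
            sigma = max(sigma0 * (1.0 - epoch / epochs), 0.5)
            for x in self.rng.permutation(data):
                bmu = self.winner(x)
                # Gaussian neighborhood around the winning unit on the map grid.
                d2 = np.sum((self.grid - self.grid[bmu]) ** 2, axis=1)
                h = np.exp(-d2 / (2.0 * sigma ** 2))
                self.weights += lr * h[:, None] * (x - self.weights)

if __name__ == "__main__":
    # Stand-in for range-sensor readings gathered while the robot explores.
    rng = np.random.default_rng(1)
    scans = rng.random((500, 180))
    som = SOM(dim=180)
    som.train(scans, epochs=10)
    print("winning feature for first scan:", som.winner(scans[0]))

After training, the index of the winning unit for each new sensor reading serves as a discrete perceptual feature; this is the kind of learned input the abstract says the second stage uses to learn high-level actions between perceptually distinctive states.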
Items (1 record):
Inventory Number: W9122400
Location Name: Electronic Resources (電子資源)
Item Class: 11. Online Reading_V (11.線上閱覽_V)
Material Type: E-book (電子書)
Call Number: EB W9122400
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of Reservations: 0