Learning Neural Representations that Support Efficient Reinforcement Learning.
Record type:
Bibliographic - electronic resource : Monograph/item
Title / Author:
Learning Neural Representations that Support Efficient Reinforcement Learning.
Author:
Stachenfeld, Kimberly.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2018
Description:
155 p.
Note:
Source: Dissertation Abstracts International, Volume: 79-10(E), Section: B.
Contained by:
Dissertation Abstracts International, 79-10B(E).
Subject:
Neurosciences.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10824319
ISBN:
9780438050419
Thesis (Ph.D.)--Princeton University, 2018.
Reinforcement learning (RL) has been transformative for neuroscience by providing a normative anchor for interpreting neural and behavioral data. End-to-end RL methods have scored impressive victories with minimal compromises in autonomy, hand-engineering, and generality. The cost of this minimalism in practice is that model-free RL methods are slow to learn and generalize poorly. Humans and animals exhibit substantially greater flexibility and generalize learned information rapidly to new environments by learning invariants and features of the environment that support fast learning and rapid transfer. An important question for both neuroscience and machine learning is what kind of "representational objectives" encourage humans and other animals to encode structure about the world. This can be formalized as "representation feature learning," in which the animal or agent learns to form representations containing information potentially relevant to the downstream RL process. We survey different representational objectives that have received attention in neuroscience and in machine learning, focusing first on conditions under which these seemingly unrelated objectives are actually mathematically equivalent. We use this to motivate a breakdown of the properties on which different learned representations meaningfully differ and which can inform contrasting hypotheses for neuroscience.

We then use this perspective to motivate our model of the hippocampus. A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity, and policy dependence in place cells suggests that the representation is not purely spatial. We approach the problem of understanding hippocampal representations from a reinforcement learning perspective, focusing on what kind of spatial representation is most useful for maximizing future reward. We show that the answer takes the form of a predictive representation: one that captures many aspects of place cell responses falling outside the traditional view of a cognitive map. We go on to argue that entorhinal grid cells encode a low-dimensional basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.
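The "predictive representation" in this abstract is commonly formalized in this literature as the successor representation (SR): under a fixed policy with state-transition matrix T and discount gamma, M = sum_t gamma^t T^t = (I - gamma T)^(-1), so M[s, s'] gives the expected discounted number of future visits to state s' starting from s. The sketch below is an illustration of that idea, not code from the thesis; the 20-state linear track, the discount value, and the function name are assumptions chosen for the example.

```python
# Minimal SR sketch, assuming a random-walk policy on a hypothetical
# 20-state linear track (a toy environment, not the thesis's task).
import numpy as np

def successor_representation(T, gamma=0.95):
    """Closed-form SR: M = (I - gamma * T)^-1 for transition matrix T."""
    n = T.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T)

n = 20
T = np.zeros((n, n))
for s in range(n):
    for nbr in (s - 1, s + 1):          # random walk: step left or right
        if 0 <= nbr < n:
            T[s, nbr] += 1.0
    T[s] /= T[s].sum()                  # normalize rows to a stochastic matrix

M = successor_representation(T)

# Rows of M behave like predictive place fields; the leading eigenvectors
# of M form a low-dimensional, multiscale basis, the role the abstract
# assigns to entorhinal grid cells.
eigvals, eigvecs = np.linalg.eig(M)
order = np.argsort(-eigvals.real)
basis = eigvecs[:, order[:5]].real      # top 5 modes, coarse to fine
print(M.shape, basis.shape)             # (20, 20) (20, 5)
```

On this toy track the top eigenvectors come out as smooth, increasingly oscillatory spatial modes, which is the sense in which a small basis can both suppress noise in predictions and expose structure at multiple scales.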
MARC record:
LDR    03447nmm a2200313 4500
001    2164072
005    20181026115419.5
008    190424s2018 ||||||||||||||||| ||eng d
020    $a 9780438050419
035    $a (MiAaPQ)AAI10824319
035    $a (MiAaPQ)princeton:12624
035    $a AAI10824319
040    $a MiAaPQ $c MiAaPQ
100 1  $a Stachenfeld, Kimberly. $3 3352106
245 10 $a Learning Neural Representations that Support Efficient Reinforcement Learning.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2018
300    $a 155 p.
500    $a Source: Dissertation Abstracts International, Volume: 79-10(E), Section: B.
500    $a Adviser: Matthew M. Botvinick.
502    $a Thesis (Ph.D.)--Princeton University, 2018.
520    $a Reinforcement learning (RL) has been transformative for neuroscience by providing a normative anchor for interpreting neural and behavioral data. End-to-end RL methods have scored impressive victories with minimal compromises in autonomy, hand-engineering, and generality. The cost of this minimalism in practice is that model-free RL methods are slow to learn and generalize poorly. Humans and animals exhibit substantially greater flexibility and generalize learned information rapidly to new environments by learning invariants and features of the environment that support fast learning and rapid transfer. An important question for both neuroscience and machine learning is what kind of "representational objectives" encourage humans and other animals to encode structure about the world. This can be formalized as "representation feature learning," in which the animal or agent learns to form representations containing information potentially relevant to the downstream RL process. We survey different representational objectives that have received attention in neuroscience and in machine learning, focusing first on conditions under which these seemingly unrelated objectives are actually mathematically equivalent. We use this to motivate a breakdown of the properties on which different learned representations meaningfully differ and which can inform contrasting hypotheses for neuroscience. We then use this perspective to motivate our model of the hippocampus. A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity, and policy dependence in place cells suggests that the representation is not purely spatial. We approach the problem of understanding hippocampal representations from a reinforcement learning perspective, focusing on what kind of spatial representation is most useful for maximizing future reward. We show that the answer takes the form of a predictive representation: one that captures many aspects of place cell responses falling outside the traditional view of a cognitive map. We go on to argue that entorhinal grid cells encode a low-dimensional basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.
590    $a School code: 0181.
650  4 $a Neurosciences. $3 588700
650  4 $a Quantitative psychology. $3 2144748
650  4 $a Cognitive psychology. $3 523881
690    $a 0317
690    $a 0632
690    $a 0633
710 2  $a Princeton University. $b Neuroscience. $3 2099004
773 0  $t Dissertation Abstracts International $g 79-10B(E).
790    $a 0181
791    $a Ph.D.
792    $a 2018
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10824319
Holdings (1 item):
Barcode: W9363619
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0