Parts, objects and scenes: Computational models and psychophysics.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Parts, objects and scenes: Computational models and psychophysics. / Renninger, Laura Lynn Walker.
Author:
Renninger, Laura Lynn Walker.
Description:
103 p.
Notes:
Source: Dissertation Abstracts International, Volume: 65-02, Section: B, page: 1047.
Contained By:
Dissertation Abstracts International, 65-02B.
Subject:
Psychology, Cognitive.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3121662
ISBN:
0496690132
LDR      04028nmm 2200337 4500
001      1844217
005      20051017073446.5
008      130614s2003 eng d
020      $a 0496690132
035      $a (UnM)AAI3121662
035      $a AAI3121662
040      $a UnM $c UnM
100 1    $a Renninger, Laura Lynn Walker. $3 1932414
245 10   $a Parts, objects and scenes: Computational models and psychophysics.
300      $a 103 p.
500      $a Source: Dissertation Abstracts International, Volume: 65-02, Section: B, page: 1047.
500      $a Chair: Jitendra Malik.
502      $a Thesis (Ph.D.)--University of California, Berkeley, 2003.
520      $a In this thesis, I develop computational models to account for several different visual phenomena: (i) perception of object parts, (ii) eye movements when learning new objects, (iii) perceived shape similarity, and (iv) rapid scene identification. The validity of each model is tested using human psychophysical techniques.
520      $a Beginning with parts, I note an ecological fact: parts of objects tend to be convex. Several elaborate rules have been proposed to account for our perception of parts, but they can all be understood as attempts to exploit convexity. I propose a model that finds convex subregions within the bounding contour of the object. The segmentations produced by the model are quantitatively compared with a large-scale data set of human object segmentations using a precision-recall framework. The model produces results within the error range of the subject-to-subject variability, demonstrating that a simple convexity rule can account for human perception of parts.
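The precision-recall comparison mentioned above can be illustrated with a small sketch. This is not the thesis's benchmark code; it simply treats part boundaries as positions along a contour and counts a predicted boundary as correct when it lies within a tolerance of a human-marked one (wrap-around along the closed contour is ignored for brevity, and the tolerance value is an arbitrary choice).

```python
# Illustrative sketch (not the author's code): precision-recall scoring of
# predicted part boundaries against human-marked boundaries on a contour.
import numpy as np

def precision_recall(predicted, human, tol=2.0):
    """predicted, human: boundary positions (sample indices along the contour).
    A prediction counts as correct if it lies within `tol` samples of a human mark."""
    predicted = np.asarray(predicted, dtype=float)
    human = np.asarray(human, dtype=float)
    if predicted.size == 0 or human.size == 0:
        return 0.0, 0.0
    # distance from each prediction to the nearest human boundary, and vice versa
    d_pred = np.min(np.abs(predicted[:, None] - human[None, :]), axis=1)
    d_hum = np.min(np.abs(human[:, None] - predicted[None, :]), axis=1)
    precision = np.mean(d_pred <= tol)  # fraction of predictions near a human mark
    recall = np.mean(d_hum <= tol)      # fraction of human marks that were recovered
    return precision, recall

# Toy example: boundaries proposed by a convexity-based segmenter vs. one subject.
p, r = precision_recall(predicted=[12, 47, 80], human=[10, 50, 78, 95])
print(f"precision={p:.2f}, recall={r:.2f}")
```

Precision penalizes spurious part boundaries and recall penalizes missed ones; the abstract reports that, under this kind of framework, the model's results fall within the range of subject-to-subject variability.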
520      $a It has been suggested that we use parts to encode and retrieve object memories. By studying eye movements as observers learn new objects, one can investigate the strategies that they adopt, and what information is useful for encoding object memories. I argue that, in fact, observers employ a strategy of sequential information maximization to reduce uncertainty about the orientations of the object contour. I collect eye movement data as subjects learn novel object silhouettes, and compare it to the fixations of a biologically motivated dynamic model. Model fixations are drawn away from predictable (straight) contours and toward angles near points of high curvature, similar to observed human behavior. The model collects information too efficiently, however. By adjusting the parameters until we match human performance, we can probe the limits of human sensitivity.
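The abstract describes the fixation strategy only at a high level, so the following is a heavily simplified, hypothetical sketch of greedy sequential information maximization: choose the next fixation that resolves the most current uncertainty (entropy) about local contour orientation, assuming acuity falls off as a Gaussian with distance from fixation. The entropy map, candidate grid spacing, and fall-off width are illustrative choices, not the thesis's parameters.

```python
# Simplified stand-in for sequential information maximization (not the thesis model):
# fixate wherever the expected reduction in orientation uncertainty is largest.
import numpy as np

def next_fixation(entropy_map, acuity_sigma=10.0):
    """entropy_map: 2-D array of current orientation uncertainty (bits) per location.
    Returns the (row, col) fixation that resolves the most total entropy, assuming
    resolution of uncertainty falls off as a Gaussian with distance from fixation."""
    h, w = entropy_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best_gain, best_xy = -np.inf, (0, 0)
    for fy in range(0, h, 4):            # coarse candidate grid keeps the demo fast
        for fx in range(0, w, 4):
            falloff = np.exp(-((ys - fy) ** 2 + (xs - fx) ** 2) / (2 * acuity_sigma ** 2))
            gain = np.sum(entropy_map * falloff)   # expected entropy resolved
            if gain > best_gain:
                best_gain, best_xy = gain, (fy, fx)
    return best_xy

# Toy example: uncertainty concentrated near a "corner" draws the next fixation there.
H = np.zeros((64, 64))
H[40:48, 20:28] = 1.0
print(next_fixation(H))   # fixation lands on the uncertain region
```

A full model of this kind would update the entropy map after each fixation and repeat; only the selection step is shown here.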
520      $a Objects may be recognized over successive fixations of their parts, or by matching their overall shapes, as implicitly suggested by the eye movement model. Matching objects would be straightforward if we had a perceptual shape metric that could capture the perceived similarity between two shapes. I examine two measures from the statistics and mathematics literature and ask whether they can represent human just-noticeable differences in shape space. I find that shape discrimination thresholds are stable when measured with these metrics, for both systematic and random shape changes. This suggests that the metrics can be useful for gauging perceptual similarity of two closely related shapes.
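The two measures examined in the thesis are not named in this abstract, so the sketch below uses a generic stand-in distance (RMS point-wise distance between corresponding contour points after removing position and scale) purely to illustrate expressing a shape change in the units of a metric, as one would when measuring discrimination thresholds.

```python
# Hypothetical stand-in shape metric (the thesis's actual metrics are not named here).
import numpy as np

def shape_distance(a, b):
    """a, b: (N, 2) arrays of corresponding contour points."""
    def normalize(c):
        c = c - c.mean(axis=0)                # remove position
        return c / np.sqrt((c ** 2).sum())    # remove scale
    a, b = normalize(np.asarray(a, float)), normalize(np.asarray(b, float))
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))  # RMS point-wise distance

# Toy example: metric distance grows with the size of a random contour perturbation,
# which is the property needed to express discrimination thresholds in metric units.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
rng = np.random.default_rng(1)
for eps in (0.01, 0.05, 0.10):
    wobble = circle + eps * rng.standard_normal(circle.shape)
    print(eps, round(shape_distance(circle, wobble), 4))
```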
520      $a Finally, I consider the phenomenon of rapid scene identification. Subjects are able to get the gist of a scene within a single fixation. I propose a model that learns to categorize scenes based only on the responses they evoke in V1-like filters. The model performs above chance, demonstrating that categorization could begin as early as V1. When compared with human performance on a scene identification task, the model performs much like observers who had between 37 and 50 ms exposure to the image.
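As a rough illustration of categorizing scenes from the "responses they evoke in V1-like filters" (this is not the thesis's model, filter bank, or stimuli), the sketch below pools rectified Gabor-filter responses over an image and trains a linear classifier on those pooled responses, using two synthetic, differently oriented "scene" categories.

```python
# Illustrative sketch: scene categorization from pooled V1-like (Gabor) responses.
import numpy as np
from scipy.signal import convolve2d
from sklearn.linear_model import LogisticRegression

def gabor(theta, freq=0.2, sigma=3.0, size=15):
    """A single orientation-tuned filter (cosine carrier under a Gaussian envelope)."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * xr)

BANK = [gabor(t) for t in np.linspace(0, np.pi, 4, endpoint=False)]

def v1_features(img):
    """Mean rectified response of each orientation channel (a crude pooling)."""
    return np.array([np.abs(convolve2d(img, k, mode="same")).mean() for k in BANK])

# Toy data: two "scene categories" dominated by vertical vs. horizontal structure.
rng = np.random.default_rng(0)
def sample(vertical, n=40):
    feats = []
    for _ in range(n):
        img = rng.normal(0.0, 0.1, (32, 32))
        stripes = np.sin(np.arange(32) * 0.8)
        img += stripes[None, :] if vertical else stripes[:, None]
        feats.append(v1_features(img))
    return np.array(feats)

X = np.vstack([sample(True), sample(False)])
y = np.array([0] * 40 + [1] * 40)
clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The abstract's claim is only that such filter responses carry enough information to categorize scenes above chance; a faithful reproduction would use real scene photographs and the thesis's own filter bank and task.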
590      $a School code: 0028.
650  4   $a Psychology, Cognitive. $3 1017810
650  4   $a Computer Science. $3 626642
650  4   $a Psychology, Physiological. $3 1017869
690      $a 0633
690      $a 0984
690      $a 0989
710 20   $a University of California, Berkeley. $3 687832
773 0    $t Dissertation Abstracts International $g 65-02B.
790 10   $a Malik, Jitendra, $e advisor
790      $a 0028
791      $a Ph.D.
792      $a 2003
856 40   $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3121662
Items
Inventory Number: W9193731
Location Name: Electronic Resources (電子資源)
Item Class: 11.線上閱覽_V (Online Reading)
Material Type: E-book (電子書)
Call Number: EB
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of Reservations: 0