Interaction between modules in learning systems for vision applications.
Record Type: Electronic resources : Monograph/item
Title/Author: Interaction between modules in learning systems for vision applications.
Author: Sethi, Amit.
Description: 96 p.
Notes: Source: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 4012.
Contained By: Dissertation Abstracts International, 67-07B.
Subject: Engineering, Electronics and Electrical.
Online resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3223715
ISBN: 9780542775611
LDR    03128nmm 2200301 4500
001    1833567
005    20071009090858.5
008    130610s2006 eng d
020    $a 9780542775611
035    $a (UMI)AAI3223715
035    $a AAI3223715
040    $a UMI $c UMI
100 1  $a Sethi, Amit. $3 1262970
245 10 $a Interaction between modules in learning systems for vision applications.
300    $a 96 p.
500    $a Source: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 4012.
500    $a Adviser: Thomas S. Huang.
502    $a Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2006.
520    $a Complex vision tasks such as event detection in a surveillance video can be divided into subtasks such as human detection, tracking, and trajectory analysis. The video can be thought of as being composed of various features. These features can be roughly arranged in a hierarchy from low-level features to high-level features. Low-level features include edges and blobs, and high-level features include objects and events. Loosely, the low-level feature extraction is based on signal/image processing techniques, while the high-level feature extraction is based on machine learning techniques.
520    $a Traditionally, vision systems extract features in a feedforward manner on the hierarchy; that is, certain modules extract low-level features and other modules make use of these low-level features to extract high-level features. Along with others in the research community, we have worked on this design approach. We briefly present our work on object recognition and multiperson tracking systems designed with this approach and highlight its advantages and shortcomings. However, our focus is on system design methods that allow tight feedback between the layers of the feature hierarchy, as well as among the high-level modules themselves. We present previous research on systems with feedback and discuss the strengths and limitations of these approaches. This analysis allows us to develop a new framework for designing complex vision systems that allows tight feedback in a hierarchy of features and modules that extract these features using a graphical representation. This new framework is based on factor graphs. It relaxes some of the constraints of traditional factor graphs and replaces their function nodes with modified versions of some of the modules that have been developed for specific vision tasks. These modules can be easily formulated by slightly modifying modules developed for specific tasks in other vision systems, if we can match the input and output variables to variables in our graphical structure. It also draws inspiration from the product-of-experts model and the free-energy view of the EM algorithm. We present experimental results and discuss the path for future development.
590    $a School code: 0090.
650  4 $a Engineering, Electronics and Electrical. $3 626636
650  4 $a Artificial Intelligence. $3 769149
650  4 $a Computer Science. $3 626642
690    $a 0544
690    $a 0800
690    $a 0984
710 20 $a University of Illinois at Urbana-Champaign. $3 626646
773 0  $t Dissertation Abstracts International $g 67-07B.
790 10 $a Huang, Thomas S., $e advisor
790    $a 0090
791    $a Ph.D.
792    $a 2006
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3223715
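The first paragraph of the abstract (the first 520 field above) describes a feedforward feature hierarchy: low-level features such as edges and blobs are extracted with signal/image-processing techniques and feed high-level modules, such as object and event detectors, built with machine learning. As a rough illustration only, the Python sketch below wires such a pipeline together; the module names and the toy extractors are assumptions for this example and are not the dissertation's actual modules.

```python
# A rough feedforward pipeline in the spirit of the first abstract paragraph:
# low-level features (edges, blobs) computed with simple signal processing feed
# high-level modules (object, event detection). All names and the toy extractors
# are assumptions for this illustration, not the dissertation's modules.
import numpy as np

def edge_map(frame):
    """Low-level feature: gradient-magnitude edge strength."""
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy)

def blob_mask(frame, thresh=0.5):
    """Low-level feature: bright pixels treated as blob evidence."""
    return frame > thresh

def detect_objects(edges, blobs):
    """High-level stand-in 'detector': keep blob pixels whose edge response
    is above the frame average and report them as object hypotheses."""
    avg = edges.mean()
    ys, xs = np.nonzero(blobs)
    return [("object", int(y), int(x)) for y, x in zip(ys, xs) if edges[y, x] > avg]

def detect_event(objects_per_frame):
    """High-level stand-in 'event' module: flag a jump in the object count."""
    counts = [len(objs) for objs in objects_per_frame]
    return any(b - a > 5 for a, b in zip(counts, counts[1:]))

# Feedforward composition: information flows strictly up the hierarchy,
# with no feedback from the high-level modules to the low-level extractors.
video = [np.random.rand(32, 32) for _ in range(4)]
objects_per_frame = [detect_objects(edge_map(f), blob_mask(f)) for f in video]
print("event detected:", detect_event(objects_per_frame))
```

The point of the sketch is the one-way data flow: if the event module's conclusions are wrong, nothing in this design lets them correct the low-level extraction, which is the shortcoming the second abstract paragraph addresses.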
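The second paragraph of the abstract proposes a factor-graph framework in which function nodes are replaced by vision modules and tight feedback flows through the feature hierarchy. The record does not spell that formulation out, so the following sketch shows only the generic idea it relaxes: sum-product message passing on a small factor graph, where messages travel both up (evidence) and down (context) the hierarchy. The variable names, factor tables, and "module" roles here are invented for illustration and are not taken from the dissertation.

```python
# Generic sum-product message passing on a toy factor graph. This is NOT the
# dissertation's relaxed formulation; it only illustrates the bidirectional
# (feedback) flow of information that a factor-graph design permits.
# Variables, factor tables, and module roles are assumptions for this example.
import numpy as np

# Binary variables at three levels of the hierarchy.
variables = {"blob": 2, "person": 2, "alarm": 2}

# Each "module" is a factor: a compatibility table over the variables it links.
factors = {
    "blob_obs":   (("blob",),           np.array([0.3, 0.7])),               # low-level evidence
    "appearance": (("blob", "person"),  np.array([[0.8, 0.2], [0.3, 0.7]])), # blob <-> object
    "activity":   (("person", "alarm"), np.array([[0.9, 0.1], [0.4, 0.6]])), # object <-> event
}

# Messages in both directions, initialised uniform.
msg_vf = {(v, f): np.ones(variables[v]) for f, (vs, _) in factors.items() for v in vs}
msg_fv = {(f, v): np.ones(variables[v]) for f, (vs, _) in factors.items() for v in vs}

for _ in range(10):  # a few sweeps suffice on this tiny graph
    # Factor -> variable: multiply in the other variables' messages, sum them out.
    for f, (vs, table) in factors.items():
        for i, v in enumerate(vs):
            t = table.copy()
            for j, u in enumerate(vs):
                if u != v:
                    shape = [1] * len(vs)
                    shape[j] = variables[u]
                    t = t * msg_vf[(u, f)].reshape(shape)
            axes = tuple(j for j in range(len(vs)) if j != i)
            msg_fv[(f, v)] = t.sum(axis=axes) if axes else t
    # Variable -> factor: product of messages arriving from the other factors.
    # This step also carries higher-level context back toward low-level variables.
    for (v, f) in msg_vf:
        prod = np.ones(variables[v])
        for g, (vs, _) in factors.items():
            if g != f and v in vs:
                prod = prod * msg_fv[(g, v)]
        msg_vf[(v, f)] = prod / prod.sum()

# Belief at the event variable after evidence has propagated up the hierarchy.
belief = np.ones(variables["alarm"])
for f, (vs, _) in factors.items():
    if "alarm" in vs:
        belief = belief * msg_fv[(f, "alarm")]
print("P(alarm = 1):", float((belief / belief.sum())[1]))
```

Because every edge carries messages in both directions, a module attached anywhere in the graph can, in principle, reweight the hypotheses of the modules below it, which is the kind of tight feedback the abstract argues for.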
Items
Inventory Number: W9224431
Location Name: Electronic resources
Item Class: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Usage Class: Normal use
Loan Status: On shelf
No. of reservations: 0