Exploiting space-time statistics of videos for face "Hallucination".
Record Type:
Language materials, printed : Monograph/item
Title/Author:
Exploiting space-time statistics of videos for face "Hallucination"./
Author:
Dedeoglu, Goksel.
Description:
126 p.
Notes:
Adviser: Takeo Kanade.
Contained By:
Dissertation Abstracts International 68-06B.
Subject:
Artificial Intelligence.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3270860
ISBN:
9780549086611
LDR  03301nam 2200313 a 45
001  958922
005  20110704
008  110704s2007 ||||||||||||||||| ||eng d
020  $a 9780549086611
035  $a (UMI)AAI3270860
035  $a AAI3270860
040  $a UMI $c UMI
100 1  $a Dedeoglu, Goksel. $3 1282388
245 1 0  $a Exploiting space-time statistics of videos for face "Hallucination".
300  $a 126 p.
500  $a Adviser: Takeo Kanade.
500  $a Source: Dissertation Abstracts International, Volume: 68-06, Section: B, page: 4108.
502  $a Thesis (Ph.D.)--Carnegie Mellon University, 2007.
520  $a Face "Hallucination" aims to recover high-quality, high-resolution images of human faces from low-resolution, blurred, and degraded images or video. This thesis presents person-specific solutions to this problem through careful exploitation of space (image) and space-time (video) models. The results demonstrate accurate restoration of facial details, with resolution enhancements up to a scaling factor of 16.
520  $a The algorithms proposed in this thesis follow the analysis-by-synthesis paradigm; they explain the observed (low-resolution) data by fitting a (high-resolution) model. In this context, the first contribution is the discovery of a scaling-induced bias that plagues most model-to-image (or image-to-image) fitting algorithms. It was found that models and observations should be treated asymmetrically, both to formulate an unbiased objective function and to derive an accurate optimization algorithm. This asymmetry is most relevant to Face Hallucination: when applied to the popular Active Appearance Model, it leads to a novel face tracking and reconstruction algorithm that is significantly more accurate than state-of-the-art methods. The analysis also reveals the inherent trade-off between computational efficiency and estimation accuracy in low-resolution regimes.
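[Editor's sketch] One way to read the asymmetric treatment described above: the model prediction is blurred and downsampled to the observation's resolution before comparison, rather than upsampling the observation to the model's resolution. The linear appearance model, the basis stack, and all names below are illustrative assumptions, not the thesis algorithm.

import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(x, s=4, sigma=1.0):
    # Blur then subsample: resampling is applied to the model side only.
    return gaussian_filter(x, sigma=sigma)[::s, ::s]

def fit_appearance(y_low, mean_face, basis, s=4, sigma=1.0):
    """Least-squares fit of linear appearance coefficients to a low-resolution
    observation y_low; each (H x W) basis image in the (k x H x W) stack is
    degraded to the observation's resolution before the comparison."""
    A = np.stack([degrade(b, s, sigma).ravel() for b in basis], axis=1)
    r = (y_low - degrade(mean_face, s, sigma)).ravel()
    coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)
    return mean_face + np.tensordot(coeffs, basis, axes=1)  # high-res estimate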
520  $a The second contribution is a statistical generative model of face videos. By treating a video as a composition of space-time patches, this model efficiently encodes the temporal dynamics of complex visual phenomena such as eye-blinks and the occlusion or appearance of teeth. The same representation is also used to define a data-driven prior on a three-dimensional Markov Random Field in space and time. Experimental results demonstrate that temporal representation and reasoning about facial expressions improve robustness by regularizing the Face Hallucination problem.
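[Editor's sketch] The space-time patch representation mentioned above, illustrated as overlapping 3-D blocks drawn from a frames x height x width video volume; a generative model or Markov Random Field prior can then be defined over such patches. Patch size, stride, and names are assumptions for illustration.

import numpy as np

def space_time_patches(video: np.ndarray, size=(3, 8, 8), step=(1, 4, 4)):
    """Yield overlapping space-time patches from a T x H x W video volume."""
    T, H, W = video.shape
    dt, dh, dw = size
    st, sh, sw = step
    for t in range(0, T - dt + 1, st):
        for i in range(0, H - dh + 1, sh):
            for j in range(0, W - dw + 1, sw):
                yield video[t:t + dt, i:i + dh, j:j + dw]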
520  $a The final contribution is an approximate compensation scheme against illumination effects. It is observed that distinct illumination subspaces of a face (each coming from a different pose and expression) still exhibit similar variation with respect to illumination. This motivates augmenting the video model with a low-dimensional illumination subspace, whose parameters are estimated jointly with high-resolution face details. Successful Face Hallucinations beyond the lighting conditions of the training videos are reported.
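[Editor's sketch] A simplified reading of the joint estimation described above: a low-dimensional illumination basis is stacked with the appearance basis and both coefficient sets are solved for together by linear least squares. The basis names and the purely linear setup are assumptions, not the thesis method.

import numpy as np

def joint_fit(y, mean_face, app_basis, illum_basis):
    """Jointly estimate appearance and illumination coefficients for an
    observed face y, given (k x H x W) basis stacks for each factor."""
    A = np.concatenate([app_basis, illum_basis], axis=0)  # (k_a + k_i, H, W)
    M = A.reshape(A.shape[0], -1).T                       # pixels x coefficients
    c, *_ = np.linalg.lstsq(M, (y - mean_face).ravel(), rcond=None)
    k_a = app_basis.shape[0]
    return c[:k_a], c[k_a:]  # appearance coefficients, illumination coefficients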
590  $a School code: 0041.
650  4  $a Artificial Intelligence. $3 769149
650  4  $a Engineering, Robotics. $3 1018454
690  $a 0771
690  $a 0800
710 2  $a Carnegie Mellon University. $3 1018096
773 0  $t Dissertation Abstracts International $g 68-06B.
790  $a 0041
790 1 0  $a Kanade, Takeo, $e advisor
791  $a Ph.D.
792  $a 2007
856 4 0  $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3270860
Items
Inventory Number: W9122387
Location Name: 電子資源 (Electronic resources)
Item Class: 11.線上閱覽_V (Online reading)
Material type: 電子書 (E-book)
Call number: EB W9122387
Usage Class: 一般使用 (Normal)
Loan Status: On shelf
No. of reservations: 0