Learning Spatial Attentive and Reconfigurable Generative Adversarial Networks for Image Synthesis.
Record type:
Bibliographic (electronic resource) : Monograph/item
Title/Author:
Learning Spatial Attentive and Reconfigurable Generative Adversarial Networks for Image Synthesis.
Author:
Sun, Wei.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2021
Pagination:
137 p.
Notes:
Source: Dissertations Abstracts International, Volume: 83-06, Section: B.
Contained by:
Dissertations Abstracts International, 83-06B.
Subject:
Maps.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28484310
ISBN:
9798505573020
Thesis (Ph.D.)--North Carolina State University, 2021.
This item must not be sold to any third party vendors.
In this dissertation, we explore our recent work on spatial attention and reconfigurable GANs, aiming to develop more interpretable and spatially controllable GANs.

In Chapter 2, we first review the history of Generative Adversarial Networks and the basic concepts and architecture designs of GANs, and then introduce recent efforts devoted to improving the efficiency and stability of GANs and to addressing their open challenges.

In Chapter 3, we present a method of learning Attentive Atrous Convolution (AAC), a novel architectural unit that can be easily integrated into the generators of GANs. The proposed AAC combines the capability of atrous spatial pyramid pooling (ASPP) to fuse multi-scale information, the capability of attention to capture the relative importance of spatial locations (especially multi-scale context) and feature channels, and the capability of residual connections to preserve information and ease optimization.

In Chapter 4, we present a novel deep generative model that transfers an image of a person from a given pose to a new pose while keeping the fashion items consistent. Unlike existing pose-guided image generation models, the proposed model takes advantage of multiple images of the same person via a convolutional LSTM and is capable of synthesizing a photorealistic image of a person given a target pose. We demonstrate our results through rigorous quantitative and qualitative experiments on two datasets.

In Chapter 5, we introduce our work on synthesizing realistic and sharp images from a reconfigurable spatial layout (i.e., bounding boxes + class labels in an image lattice) and style (i.e., structural and appearance variations encoded by latent vectors). We present a layout- and style-based architecture for generative adversarial networks (termed LostGANs) that can be trained end-to-end to generate images from reconfigurable layout and style. The proposed LostGAN consists of two new components: (i) learning fine-grained mask maps in a weakly supervised manner to bridge the gap between layouts and images, and (ii) learning object instance-specific, layout-aware feature normalization (ISLA-Norm) in the generator to realize multi-object style generation.

In Chapter 6, we explore deep consensus learning (DCL) for joint layout-to-image synthesis and weakly supervised image segmentation. The proposed DCL is formulated as a three-player minimax game that builds on conditional GANs (specifically, LostGANs) and introduces an inference network. The three networks are trained end-to-end using two deep consensus mappings together with the adversarial loss. The proposed DCL sheds light on simultaneously learning structured input synthesis and weakly supervised structured output prediction from scratch, end-to-end.
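The abstract's ISLA-Norm idea (normalize generator features, then apply per-object affine style parameters spread over layout masks) can be illustrated with a minimal NumPy sketch. The function name, tensor shapes, and mask-blending scheme below are assumptions for illustration only, not the dissertation's actual implementation:

```python
import numpy as np

def isla_norm(feat, masks, gammas, betas, eps=1e-5):
    """Sketch of instance-specific, layout-aware feature normalization.

    feat:   (C, H, W) generator feature map
    masks:  (N, H, W) soft per-object masks derived from the layout
    gammas: (N, C) per-instance scale parameters (from style latents)
    betas:  (N, C) per-instance shift parameters (from style latents)
    """
    # Normalize each channel over its spatial locations.
    mu = feat.mean(axis=(1, 2), keepdims=True)
    var = feat.var(axis=(1, 2), keepdims=True)
    norm = (feat - mu) / np.sqrt(var + eps)
    # Spread each instance's affine parameters over its mask region,
    # producing spatially varying gamma/beta maps of shape (C, H, W).
    gamma_map = np.einsum('nhw,nc->chw', masks, gammas)
    beta_map = np.einsum('nhw,nc->chw', masks, betas)
    return (1.0 + gamma_map) * norm + beta_map
```

With all-zero gammas and betas this reduces to plain channel normalization; giving each object its own (gamma, beta) lets a single generator render a distinct style per instance, which is the point of multi-object style generation.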
Subjects--Index Terms:
Image synthesis
MARC record:
LDR 04177nmm a2200421 4500
001 2352317
005 20221128103620.5
008 241004s2021 ||||||||||||||||| ||eng d
020 $a 9798505573020
035 $a (MiAaPQ)AAI28484310
035 $a (MiAaPQ)NCState_Univ18402038524
035 $a AAI28484310
040 $a MiAaPQ $c MiAaPQ
100 1 $a Sun, Wei. $3 1084496
245 10 $a Learning Spatial Attentive and Reconfigurable Generative Adversarial Networks for Image Synthesis.
260 1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300 $a 137 p.
500 $a Source: Dissertations Abstracts International, Volume: 83-06, Section: B.
500 $a Advisor: Wu, Tianfu.
502 $a Thesis (Ph.D.)--North Carolina State University, 2021.
506 $a This item must not be sold to any third party vendors.
520 $a
In this dissertation, we explore our recent work on spatial attention and reconfigurable GANs, aiming to develop more interpretable and spatially controllable GANs. In Chapter 2, we first review the history of Generative Adversarial Networks and the basic concepts and architecture designs of GANs, and then introduce recent efforts devoted to improving the efficiency and stability of GANs and to addressing their open challenges. In Chapter 3, we present a method of learning Attentive Atrous Convolution (AAC), a novel architectural unit that can be easily integrated into the generators of GANs. The proposed AAC combines the capability of atrous spatial pyramid pooling (ASPP) to fuse multi-scale information, the capability of attention to capture the relative importance of spatial locations (especially multi-scale context) and feature channels, and the capability of residual connections to preserve information and ease optimization. In Chapter 4, we present a novel deep generative model that transfers an image of a person from a given pose to a new pose while keeping the fashion items consistent. Unlike existing pose-guided image generation models, the proposed model takes advantage of multiple images of the same person via a convolutional LSTM and is capable of synthesizing a photorealistic image of a person given a target pose. We demonstrate our results through rigorous quantitative and qualitative experiments on two datasets. In Chapter 5, we introduce our work on synthesizing realistic and sharp images from a reconfigurable spatial layout (i.e., bounding boxes + class labels in an image lattice) and style (i.e., structural and appearance variations encoded by latent vectors). We present a layout- and style-based architecture for generative adversarial networks (termed LostGANs) that can be trained end-to-end to generate images from reconfigurable layout and style. The proposed LostGAN consists of two new components: (i) learning fine-grained mask maps in a weakly supervised manner to bridge the gap between layouts and images, and (ii) learning object instance-specific, layout-aware feature normalization (ISLA-Norm) in the generator to realize multi-object style generation. In Chapter 6, we explore deep consensus learning (DCL) for joint layout-to-image synthesis and weakly supervised image segmentation. The proposed DCL is formulated as a three-player minimax game that builds on conditional GANs (specifically, LostGANs) and introduces an inference network. The three networks are trained end-to-end using two deep consensus mappings together with the adversarial loss. The proposed DCL sheds light on simultaneously learning structured input synthesis and weakly supervised structured output prediction from scratch, end-to-end.
590 $a School code: 0155.
650 4 $a Maps. $3 544078
650 4 $a Geography. $3 524010
650 4 $a Computer science. $3 523869
650 4 $a Artificial intelligence. $3 516317
650 4 $a Information science. $3 554358
653 $a Image synthesis
653 $a Spatial networks
653 $a Spatial attention
653 $a Generative Adversarial Networks
653 $a Reconfigurable GANs
653 $a Deep Consensus Learning
690 $a 0984
690 $a 0366
690 $a 0800
690 $a 0635
690 $a 0723
710 2 $a North Carolina State University. $3 1018772
773 0 $t Dissertations Abstracts International $g 83-06B.
790 $a 0155
791 $a Ph.D.
792 $a 2021
793 $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28484310
Holdings (1 record):
Barcode: W9474755
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0