Learning to Generate 3D Training Data.
Yang, Dawei.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Learning to Generate 3D Training Data.
Author:
Yang, Dawei.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2020.
Description:
97 p.
Notes:
Source: Dissertations Abstracts International, Volume: 82-07, Section: B.
Contained By:
Dissertations Abstracts International, 82-07B.
Subject:
Materials science.
Online resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28240370
ISBN:
9798684618659
Yang, Dawei.
Learning to Generate 3D Training Data.
- Ann Arbor : ProQuest Dissertations & Theses, 2020 - 97 p.
Source: Dissertations Abstracts International, Volume: 82-07, Section: B.
Thesis (Ph.D.)--University of Michigan, 2020.
This item must not be sold to any third party vendors.
Human-level visual 3D perception has long been pursued by researchers in computer vision, computer graphics, and robotics. Recent years have seen an emerging line of work that uses synthetic images to train deep networks for single-image 3D perception. Synthetic images rendered by graphics engines are a promising source of training data for deep neural networks because they come with perfect 3D ground truth for free. However, the 3D shapes and scenes to be rendered are still largely created manually. Moreover, it is challenging to ensure that synthetic images collected this way help a deep network perform well on real images, because graphics generation pipelines require numerous design decisions, such as the selection of 3D shapes and the placement of the camera. In this dissertation, we propose automatic generation pipelines for synthetic data that aim to improve the task performance of a trained network. We explore both supervised and unsupervised directions for the automatic optimization of 3D decisions. For supervised learning, we demonstrate how to optimize 3D parameters so that a trained network generalizes well to real images. We first show that we can construct a purely synthetic 3D shape that achieves state-of-the-art performance on a shape-from-shading benchmark. We further parameterize the design decisions as a vector and propose a hybrid gradient approach to efficiently optimize the vector toward usefulness. Our hybrid gradient outperforms classic black-box approaches on a wide selection of 3D perception tasks. For unsupervised learning, we propose a novelty metric for 3D parameter evolution based on deep autoregressive models. We show that, without any extrinsic motivation, the novelty computed from autoregressive models alone is helpful. Our novelty metric can consistently encourage a random synthetic generator to produce more useful training data for downstream 3D perception tasks.
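The abstract's unsupervised direction scores candidate generator parameters by how novel they are under a deep autoregressive model. A minimal sketch of that idea, under stated assumptions: novelty is taken as the negative mean log-likelihood of a (discretized) parameter sequence, and a simple count-based bigram model stands in for the deep autoregressive model used in the dissertation. The class name `BigramNoveltyModel` and all numeric choices here are hypothetical illustrations, not the author's implementation.

```python
import math
from collections import defaultdict

class BigramNoveltyModel:
    """Toy first-order autoregressive model over discretized parameter tokens.

    Novelty of a sequence = negative mean log-likelihood under the model,
    so sequences unlike previously seen parameters score higher.
    """

    def __init__(self, smoothing=1.0, vocab_size=16):
        self.counts = defaultdict(lambda: defaultdict(float))
        self.totals = defaultdict(float)
        self.smoothing = smoothing
        self.vocab_size = vocab_size

    def update(self, tokens):
        # Fit on one sequence of discretized generator parameters.
        for prev, cur in zip(tokens, tokens[1:]):
            self.counts[prev][cur] += 1.0
            self.totals[prev] += 1.0

    def novelty(self, tokens):
        # Negative mean log-likelihood with additive smoothing.
        nll = 0.0
        for prev, cur in zip(tokens, tokens[1:]):
            p = (self.counts[prev][cur] + self.smoothing) / \
                (self.totals[prev] + self.smoothing * self.vocab_size)
            nll -= math.log(p)
        return nll / max(len(tokens) - 1, 1)

model = BigramNoveltyModel()
for _ in range(100):
    model.update([0, 1, 2, 3])       # a frequently generated parameter pattern

seen = model.novelty([0, 1, 2, 3])   # familiar sequence: low novelty
new = model.novelty([3, 2, 1, 0])    # unseen sequence: high novelty
assert new > seen
```

In a parameter-evolution loop, candidates with the highest novelty scores would be kept and mutated, nudging the random generator away from parameter settings the autoregressive model already predicts well.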
ISBN: 9798684618659
Subjects--Topical Terms:
Materials science.
Subjects--Index Terms:
Computer vision
LDR
:03295nmm a2200445 4500
001
2277017
005
20210510092502.5
008
220723s2020 ||||||||||||||||| ||eng d
020
$a
9798684618659
035
$a
(MiAaPQ)AAI28240370
035
$a
(MiAaPQ)umichrackham003391
035
$a
AAI28240370
040
$a
MiAaPQ
$c
MiAaPQ
100
1
$a
Yang, Dawei.
$3
3555322
245
1 0
$a
Learning to Generate 3D Training Data.
260
1
$a
Ann Arbor :
$b
ProQuest Dissertations & Theses,
$c
2020
300
$a
97 p.
500
$a
Source: Dissertations Abstracts International, Volume: 82-07, Section: B.
500
$a
Advisor: Deng, Jia; Fouhey, David Ford.
502
$a
Thesis (Ph.D.)--University of Michigan, 2020.
506
$a
This item must not be sold to any third party vendors.
506
$a
This item must not be added to any third party search indexes.
520
$a
Human-level visual 3D perception has long been pursued by researchers in computer vision, computer graphics, and robotics. Recent years have seen an emerging line of work that uses synthetic images to train deep networks for single-image 3D perception. Synthetic images rendered by graphics engines are a promising source of training data for deep neural networks because they come with perfect 3D ground truth for free. However, the 3D shapes and scenes to be rendered are still largely created manually. Moreover, it is challenging to ensure that synthetic images collected this way help a deep network perform well on real images, because graphics generation pipelines require numerous design decisions, such as the selection of 3D shapes and the placement of the camera. In this dissertation, we propose automatic generation pipelines for synthetic data that aim to improve the task performance of a trained network. We explore both supervised and unsupervised directions for the automatic optimization of 3D decisions. For supervised learning, we demonstrate how to optimize 3D parameters so that a trained network generalizes well to real images. We first show that we can construct a purely synthetic 3D shape that achieves state-of-the-art performance on a shape-from-shading benchmark. We further parameterize the design decisions as a vector and propose a hybrid gradient approach to efficiently optimize the vector toward usefulness. Our hybrid gradient outperforms classic black-box approaches on a wide selection of 3D perception tasks. For unsupervised learning, we propose a novelty metric for 3D parameter evolution based on deep autoregressive models. We show that, without any extrinsic motivation, the novelty computed from autoregressive models alone is helpful. Our novelty metric can consistently encourage a random synthetic generator to produce more useful training data for downstream 3D perception tasks.
590
$a
School code: 0127.
650
4
$a
Materials science.
$3
543314
650
4
$a
Computer science.
$3
523869
650
4
$a
Artificial intelligence.
$3
516317
650
4
$a
Design.
$3
518875
650
4
$a
Information technology.
$3
532993
650
4
$a
Optics.
$3
517925
650
4
$a
Robotics.
$3
519753
653
$a
Computer vision
653
$a
Human-level visual 3D perception
653
$a
Computer graphics
653
$a
Synthetic data
653
$a
Deep neural networks
690
$a
0984
690
$a
0771
690
$a
0389
690
$a
0752
690
$a
0489
690
$a
0794
690
$a
0800
710
2
$a
University of Michigan.
$b
Computer Science & Engineering.
$3
3285590
773
0
$t
Dissertations Abstracts International
$g
82-07B.
790
$a
0127
791
$a
Ph.D.
792
$a
2020
793
$a
English
856
4 0
$u
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28240370
Location: Electronic resources
Items: 1 record
Inventory Number: W9428751
Location Name: Electronic resources
Item Class: 11. Online reading
Material type: E-book
Call number: EB
Usage Class: Normal
Loan Status: On shelf
No. of reservations: 0