Real-Time Exploration of Photorealistic Virtual Environments = Echtzeiterkundung von Photorealistischen Virtuellen Welten.
Record type: Bibliographic - electronic resource : Monograph/item
Title/Author: Real-Time Exploration of Photorealistic Virtual Environments / Ruckert, Darius.
Other title: Echtzeiterkundung von Photorealistischen Virtuellen Welten.
Author: Ruckert, Darius.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2023
Description: 113 p.
Notes: Source: Dissertations Abstracts International, Volume: 85-01, Section: B.
Contained by: Dissertations Abstracts International, 85-01B.
Subject: Augmented reality.
Electronic resource: https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30541148
ISBN: 9798379876166
Thesis (D.Eng.)--Friedrich-Alexander-Universitaet Erlangen-Nuernberg (Germany), 2023.
This item must not be sold to any third party vendors.
In the last decade, many academic and industrial research groups around the globe have focused on virtual and augmented reality (XR), leading to rapid progress in the field. Alongside incremental hardware advances, new software techniques have also played an important role in this recent success. The software improvements range from new tracking algorithms, which allow more accurate and robust localization of the XR headset, to new rendering engines that can render photorealistic environments in real time. In the first half of this thesis, I present my work on camera tracking for low-power mobile devices such as XR headsets. The proposed tracking pipeline uses several algorithmic tricks to reduce computational complexity. For example, I present a novel decoupled formulation of visual-inertial bundle adjustment, which makes the optimization more efficient and allows it to run in parallel. Furthermore, I show how recursive matrix algebra can be used to speed up the nonlinear optimization problems of a typical tracking pipeline. Overall, the proposed pipeline achieves similar or better accuracy than the state of the art while being substantially faster. On integrated or low-power computers, my method can process over 60 frames per second, which exceeds the frame rate of commodity cameras by a factor of two. Later in this thesis, I show how the output of my tracking system can be used to generate high-quality RGB and depth images from arbitrary locations in the scene. The proposed method renders triangulated depth images of keyframes into a target view and fuses them in the fragment shader. This approach is very efficient and allows large scene updates from the tracking system, since no global volumetric model is built.
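The nonlinear-least-squares core that such tracking pipelines accelerate can be illustrated with a plain Gauss-Newton solver. This is an illustrative sketch only, not the thesis's method: the problem (aligning 2D points under rotation and translation) and all names are assumptions, and the dense normal-equations solve stands in for the decoupled, recursive formulations the abstract describes.

```python
import numpy as np

# Illustrative sketch: Gauss-Newton on a tiny pose-estimation problem
# (2D rigid alignment). Real tracking pipelines solve much larger
# visual-inertial problems and exploit sparsity instead of a dense solve.

def residuals(params, src, dst):
    theta, tx, ty = params
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return (src @ R.T + [tx, ty] - dst).ravel()

def jacobian(params, src):
    theta, _, _ = params
    c, s = np.cos(theta), np.sin(theta)
    J = np.zeros((2 * len(src), 3))
    # Partial derivatives of each (x, y) residual w.r.t. (theta, tx, ty).
    J[0::2, 0] = -s * src[:, 0] - c * src[:, 1]
    J[1::2, 0] =  c * src[:, 0] - s * src[:, 1]
    J[0::2, 1] = 1.0
    J[1::2, 2] = 1.0
    return J

def gauss_newton(src, dst, iters=10):
    params = np.zeros(3)
    for _ in range(iters):
        r = residuals(params, src, dst)
        J = jacobian(params, src)
        # Normal equations: J^T J dx = -J^T r.
        params += np.linalg.solve(J.T @ J, -J.T @ r)
    return params

rng = np.random.default_rng(0)
src = rng.normal(size=(20, 2))
true = np.array([0.3, 1.0, -2.0])          # theta, tx, ty
c, s = np.cos(true[0]), np.sin(true[0])
dst = src @ np.array([[c, -s], [s, c]]).T + true[1:]
est = gauss_newton(src, dst)               # recovers theta, tx, ty
```

On noiseless correspondences the iteration converges to the generating parameters; the thesis's contribution lies in restructuring exactly this kind of solve so it runs in real time on low-power hardware.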
AR applications can use the resulting images to visualize the scene or display virtual objects with correct occlusion. Finally, I present Approximate Differentiable One-Pixel Point Rendering (ADOP), a novel point-based neural rendering approach for real-time novel view synthesis. The input is an initial reconstruction of a scene produced by standard photogrammetry software. During a short training stage, neural point descriptors are learned along with the parameters of a rendering network and a tone mapper. After that, we are able to synthesize photorealistic views of these scenes at arbitrary camera locations. Thanks to a novel differentiable point rasterizer, we are also able to optimize the initial camera parameters and point cloud provided by the photogrammetry software. In several experiments, I show that this input optimization can significantly improve image quality and makes ADOP one of the best-performing neural rendering approaches.
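The "one-pixel" point rasterization that ADOP-style renderers build on can be sketched in its plain, non-differentiable form: each 3D point projects to a single pixel and a depth test keeps the nearest point. This is an assumption-laden illustration, not the thesis's code; function names, shapes, and the intrinsics are hypothetical.

```python
import numpy as np

# Hypothetical sketch of one-pixel point splatting with a z-buffer.
def rasterize_points(points, colors, K, width, height):
    """points: (N, 3) in camera coordinates with z > 0; colors: (N, 3)."""
    z = points[:, 2]
    uv = points @ K.T                       # pinhole projection
    px = np.round(uv[:, :2] / uv[:, 2:3]).astype(int)

    image = np.zeros((height, width, 3))
    zbuf = np.full((height, width), np.inf)
    for (u, v), depth, col in zip(px, z, colors):
        if 0 <= u < width and 0 <= v < height and depth < zbuf[v, u]:
            zbuf[v, u] = depth              # depth test: nearest point wins
            image[v, u] = col
    return image, zbuf

# Usage example with made-up intrinsics: two points land on the same
# pixel; the nearer (green) one survives the depth test.
K = np.array([[10., 0., 5.], [0., 10., 5.], [0., 0., 1.]])
pts = np.array([[0., 0., 2.], [0., 0., 1.]])
cols = np.array([[1., 0., 0.], [0., 1., 0.]])
image, zbuf = rasterize_points(pts, cols, K, width=10, height=10)
```

The hard `argmin` over depth in this sketch is what a differentiable point rasterizer relaxes, so that gradients can flow back into point positions and camera parameters.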
LDR
:06927nmm a2200349 4500
001
2397232
005
20240617111352.5
006
m o d
007
cr#unu||||||||
008
251215s2023 ||||||||||||||||| ||eng d
020
$a
9798379876166
035
$a
(MiAaPQ)AAI30541148
035
$a
(MiAaPQ)FAUErlangen22786
035
$a
AAI30541148
040
$a
MiAaPQ
$c
MiAaPQ
100
1
$a
Ruckert, Darius.
$3
3766993
245
1 0
$a
Real-Time Exploration of Photorealistic Virtual Environments =
$b
Echtzeiterkundung von Photorealistischen Virtuellen Welten.
260
1
$a
Ann Arbor :
$b
ProQuest Dissertations & Theses,
$c
2023
300
$a
113 p.
500
$a
Source: Dissertations Abstracts International, Volume: 85-01, Section: B.
500
$a
Advisor: Stamminger, Marc.
502
$a
Thesis (D.Eng.)--Friedrich-Alexander-Universitaet Erlangen-Nuernberg (Germany), 2023.
506
$a
This item must not be sold to any third party vendors.
520
$a
In the last decade, many academic and industrial research groups around the globe have focused on virtual and augmented reality (XR), leading to rapid progress in the field. Alongside incremental hardware advances, new software techniques have also played an important role in this recent success. The software improvements range from new tracking algorithms, which allow more accurate and robust localization of the XR headset, to new rendering engines that can render photorealistic environments in real time. In the first half of this thesis, I present my work on camera tracking for low-power mobile devices such as XR headsets. The proposed tracking pipeline uses several algorithmic tricks to reduce computational complexity. For example, I present a novel decoupled formulation of visual-inertial bundle adjustment, which makes the optimization more efficient and allows it to run in parallel. Furthermore, I show how recursive matrix algebra can be used to speed up the nonlinear optimization problems of a typical tracking pipeline. Overall, the proposed pipeline achieves similar or better accuracy than the state of the art while being substantially faster. On integrated or low-power computers, my method can process over 60 frames per second, which exceeds the frame rate of commodity cameras by a factor of two. Later in this thesis, I show how the output of my tracking system can be used to generate high-quality RGB and depth images from arbitrary locations in the scene. The proposed method renders triangulated depth images of keyframes into a target view and fuses them in the fragment shader. This approach is very efficient and allows large scene updates from the tracking system, since no global volumetric model is built. AR applications can use the resulting images to visualize the scene or display virtual objects with correct occlusion. Finally, I present Approximate Differentiable One-Pixel Point Rendering (ADOP), a novel point-based neural rendering approach for real-time novel view synthesis. The input is an initial reconstruction of a scene produced by standard photogrammetry software. During a short training stage, neural point descriptors are learned along with the parameters of a rendering network and a tone mapper. After that, we are able to synthesize photorealistic views of these scenes at arbitrary camera locations. Thanks to a novel differentiable point rasterizer, we are also able to optimize the initial camera parameters and point cloud provided by the photogrammetry software. In several experiments, I show that this input optimization can significantly improve image quality and makes ADOP one of the best-performing neural rendering approaches.
590
$a
School code: 0575.
650
4
$a
Augmented reality.
$3
1620831
650
4
$a
Global positioning systems--GPS.
$3
3559357
650
4
$a
Cameras.
$3
524039
650
4
$a
Virtual communities.
$3
3564377
650
4
$a
Art.
$3
516292
650
4
$a
Photographs.
$3
627415
650
4
$a
Virtual reality.
$3
527460
650
4
$a
Sensors.
$3
3549539
650
4
$a
Aerospace engineering.
$3
1002622
650
4
$a
Information technology.
$3
532993
690
$a
0538
690
$a
0489
710
2
$a
Friedrich-Alexander-Universitaet Erlangen-Nuernberg (Germany).
$3
3564652
773
0
$t
Dissertations Abstracts International
$g
85-01B.
790
$a
0575
791
$a
D.Eng.
792
$a
2023
793
$a
English
856
4 0
$u
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30541148
Holdings (1 item):
Barcode: W9505552
Location: Electronic resources
Circulation class: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0
評論
新增評論
分享你的心得
Export
取書館
處理中
...
變更密碼
登入