Object Detection for Autonomous Systems Operating Under Challenging Conditions.
Record type:
Bibliographic - Electronic resource : Monograph/item
Title/Author:
Object Detection for Autonomous Systems Operating Under Challenging Conditions.
Author:
Hnewa, Mazin.
Description:
1 online resource (103 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 84-08, Section: B.
Contained By:
Dissertations Abstracts International, 84-08B.
Subject:
Engineering.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30250384 (click for full text, PQDT)
ISBN:
9798374412635
MARC record:
LDR
:05512nmm a2200409K 4500
001
2362668
005
20231102122753.5
006
m o d
007
cr mn ---uuuuu
008
241011s2023 xx obm 000 0 eng d
020
$a
9798374412635
035
$a
(MiAaPQ)AAI30250384
035
$a
AAI30250384
040
$a
MiAaPQ
$b
eng
$c
MiAaPQ
$d
NTU
100
1
$a
Hnewa, Mazin.
$3
3703411
245
1 0
$a
Object Detection for Autonomous Systems Operating Under Challenging Conditions.
264
0
$c
2023
300
$a
1 online resource (103 pages)
336
$a
text
$b
txt
$2
rdacontent
337
$a
computer
$b
c
$2
rdamedia
338
$a
online resource
$b
cr
$2
rdacarrier
500
$a
Source: Dissertations Abstracts International, Volume: 84-08, Section: B.
500
$a
Advisor: Radha, Hayder.
502
$a
Thesis (Ph.D.)--Michigan State University, 2023.
504
$a
Includes bibliographical references
520
$a
Advanced Driver-Assistance Systems (ADAS) and autonomous systems in general, such as emerging autonomous vehicles, rely heavily on visual data and state-of-the-art deep learning approaches to classify and localize objects such as pedestrians, traffic signs and lights, and other nearby cars, helping the vehicles maneuver safely in their environments. However, due to the well-known domain shift problem, the performance of object detection methods can degrade significantly under challenging scenarios such as low light and adverse weather. The domain shift problem arises from the difference between the distribution of the source data used for training and that of the target data encountered in realistic testing scenarios. The area of domain adaptation has been instrumental in addressing this problem across many applications; in particular, domain adaptation frameworks for object detection provide powerful tools for handling a variety of underlying changes in probability distribution between training and testing data.
In this dissertation, we first propose a novel integrated Generative-model based unsupervised training and Domain Adaptation (GDA) framework that improves the performance of a region-proposal based object detector under challenging scenarios. In particular, we exploit unsupervised image-to-image translation to generate annotated visuals that are representative of a challenging target domain. We then use these generated annotated visuals, in addition to unlabeled target-domain data, to train a domain-adaptive region-proposal based object detector. We show that this integrated approach outperforms both of its components, unsupervised image translation and domain adaptation, when each is used separately.
Despite the popularity of region-proposal based object detectors such as Faster R-CNN and many of its variants, these detectors suffer from long inference times, so they are not the optimal choice for time-critical, real-time applications such as autonomous driving. In the second part of this dissertation, we therefore propose a novel MultiScale Domain Adaptive YOLO (MS-DAYOLO) framework for the popular state-of-the-art real-time object detector YOLO. MS-DAYOLO employs multiple domain adaptation paths and corresponding domain classifiers at different scales of the recently introduced YOLOv4 object detector. Building on our baseline MS-DAYOLO architecture, we introduce three novel deep learning architectures for a Domain Adaptation Network (DAN) that generates domain-invariant features: a Progressive Feature Reduction architecture, a Unified Domain Classifier, and an Integrated architecture.
While RGB cameras are the most popular imaging sensors used by ADAS and autonomous vehicles for cost and related practical reasons, other modalities such as thermal and gated imaging sensors can significantly improve detection performance under challenging conditions. However, these sensors are expensive, and incorporating them into ADAS and autonomous vehicle platforms may cause design and manufacturing challenges. In the third part of this dissertation, we therefore propose a new framework that uses Cross Modality Knowledge Distillation (CMKD) to improve the performance of RGB-only pedestrian detection in low light and adverse weather without increasing computational complexity during inference.
Specifically, we develop two CMKD methods that rely on feature-based knowledge distillation and adversarial training to transfer knowledge from a detector (teacher) trained on multiple modalities to a single-modality detector (student) trained on RGB images only. To validate the proposed approaches, we train and test them on popular datasets captured by vehicles driving under a range of conditions, including challenging scenarios. Our experiments show significant improvements in object detection performance compared with state-of-the-art methods. (Illustrative code sketches of the three proposed components follow the MARC record below.)
533
$a
Electronic reproduction.
$b
Ann Arbor, Mich. :
$c
ProQuest,
$d
2023
538
$a
Mode of access: World Wide Web
650
4
$a
Engineering.
$3
586835
650
4
$a
Computer science.
$3
523869
650
4
$a
Automotive engineering.
$3
2181195
650
4
$a
Electrical engineering.
$3
649834
653
$a
Autonomous
653
$a
Domain shift
653
$a
Generative-model
653
$a
Object detector
653
$a
RGB images
655
7
$a
Electronic books.
$2
lcsh
$3
542853
690
$a
0537
690
$a
0544
690
$a
0984
690
$a
0540
710
2
$a
ProQuest Information and Learning Co.
$3
783688
710
2
$a
Michigan State University.
$b
Electrical Engineering - Doctor of Philosophy.
$3
2094234
773
0
$t
Dissertations Abstracts International
$g
84-08B.
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30250384
$z
click for full text (PQDT)
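The first component of the GDA framework pairs unsupervised image-to-image translation with the existing source annotations. Below is a minimal PyTorch sketch of that data-generation step, under the assumption that a pretrained CycleGAN-style generator G (hypothetical here; the dissertation's exact translator may differ) maps clear-weather source images to the challenging target style. Because such translation preserves image geometry, the source bounding boxes remain valid for the translated images.

    import torch

    def translate_labeled_set(G, loader, device="cpu"):
        # G: hypothetical pretrained source-to-target image translator
        # (e.g., a CycleGAN-style generator). loader yields (images, boxes),
        # with boxes as per-image [N, 4] tensors in pixel coordinates.
        G.eval()
        generated = []
        with torch.no_grad():
            for images, boxes in loader:
                fake = G(images.to(device))               # target-style images
                generated.extend(zip(fake.cpu(), boxes))  # boxes carry over unchanged
        return generated

The translated, still-annotated images are then mixed with unlabeled target-domain data to train the domain-adaptive region-proposal detector, per the abstract.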
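Both the domain-adaptive detector of part one and MS-DAYOLO rely on adversarial feature alignment; MS-DAYOLO attaches a domain classifier to each of several YOLOv4 feature scales. This is a minimal sketch of the standard gradient-reversal mechanism such classifiers are typically built on; the class and parameter names are illustrative, not taken from the dissertation.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        # Identity in the forward pass; reverses (and scales) gradients in
        # the backward pass, so the backbone learns features that fool the
        # domain classifier, i.e., domain-invariant features.
        @staticmethod
        def forward(ctx, x, lambda_):
            ctx.lambda_ = lambda_
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambda_ * grad_output, None

    class DomainClassifier(nn.Module):
        # One per detector scale: predicts, for every spatial location of a
        # feature map, whether it came from the source or the target domain.
        def __init__(self, channels):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, channels // 2, kernel_size=1),
                nn.ReLU(),
                nn.Conv2d(channels // 2, 1, kernel_size=1),  # domain logit map
            )

        def forward(self, feat, lambda_=1.0):
            return self.net(GradReverse.apply(feat, lambda_))

During training, source and target images pass through the detector backbone; each scale's classifier is scored with a binary cross-entropy domain loss that is added to the detection loss (the latter computed on annotated source images only). The reversed gradients push the shared features toward domain invariance.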
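For the third component, feature-based cross-modality knowledge distillation pulls the RGB-only student's intermediate features toward those of a frozen multimodal teacher. Here is a minimal sketch of such a distillation loss, assuming matched feature-map shapes at chosen depths; the dissertation's adversarial variant would additionally train a discriminator over the two feature sets, which is omitted here.

    import torch.nn.functional as F

    def cmkd_feature_loss(student_feats, teacher_feats):
        # student_feats / teacher_feats: lists of same-shape feature maps
        # taken at matching depths. The teacher is frozen, so its features
        # are detached from the computation graph.
        loss = 0.0
        for fs, ft in zip(student_feats, teacher_feats):
            loss = loss + F.mse_loss(fs, ft.detach())
        return loss

At inference only the RGB student runs, so the approach adds no computational cost at deployment, consistent with the abstract's claim.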
Holdings:
Barcode: W9485024
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0