Attacks and Defenses on Autonomous Vehicles : From Sensor Perception to Control Area Networks.
Record type:
Bibliographic - electronic resource : Monograph/item
Title / Author:
Attacks and Defenses on Autonomous Vehicles :
Other title:
From Sensor Perception to Control Area Networks.
Author:
Man, Yanmao.
Extent:
1 online resource (223 pages)
Note:
Source: Dissertations Abstracts International, Volume: 84-02, Section: B.
Contained by:
Dissertations Abstracts International, 84-02B.
Subject:
Computer engineering.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29325996 (click for full text, PQDT)
ISBN:
9798841743880
LDR  04696nmm a2200397K 4500
001  2359559
005  20230917195745.5
006  m o d
007  cr mn ---uuuuu
008  241011s2022 xx obm 000 0 eng d
020    $a 9798841743880
035    $a (MiAaPQ)AAI29325996
035    $a AAI29325996
040    $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Man, Yanmao. $3 3700166
245 10 $a Attacks and Defenses on Autonomous Vehicles : $b From Sensor Perception to Control Area Networks.
264  0 $c 2022
300    $a 1 online resource (223 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Dissertations Abstracts International, Volume: 84-02, Section: B.
500    $a Advisor: Li, Ming.
502    $a Thesis (Ph.D.)--The University of Arizona, 2022.
504    $a Includes bibliographical references
520    $a Autonomous driving has been a focus of both industry and academia. The autonomous-vehicle decision-making pipeline typically comprises several modules, from perceiving the physical world to interacting with it. The perception module interprets the surrounding environment: obstacles, lanes, traffic signals, and so on. This high-level information is passed to the planning module, which generates decisions such as accelerating or turning right. These decisions also draw on a prediction module that estimates the future trajectories of obstacles as a precaution. The control module translates the decisions into low-level instructions, transmitted on the control area network (CAN) bus in the form of CAN messages. Finally, the actuation module executes these instructions. Since autonomous vehicles normally operate at high speed and usually carry humans, their safety and security are of great importance. Recently, a number of papers have raised security concerns by demonstrating various attacks. Because the perception module leverages deep learning models for object detection, traffic sign recognition, and similar tasks, it is inherently subject to adversarial examples specially crafted to deceive neural networks. More severely, physically realizable adversarial examples/patches, which can be implemented externally with printed stickers or projectors, bypass the digital protections of autonomous systems and are thus more practical, stealthier, and harder to defend against. On the other hand, due to its lack of secure authentication, the CAN protocol has been shown to be susceptible to ECU (electronic control unit) impersonation attacks, in which a compromised ECU broadcasts forged CAN messages to trigger dangerous actuation, such as deploying airbags on a highway under otherwise normal driving conditions. In this dissertation, we study the security of autonomous vehicles from sensor perception to the CAN bus. We propose attacks that nullify a broad category of state-of-the-art (SOTA) defenses, and we develop defenses that generalize across different attack methodologies. In particular, for the perception module we develop a visible-light-based, system-aware camera attack, termed GhostImage, that can be realized physically and remotely (Chapter 3). We exploit the ghost effect of the camera system to convey adversarial noise that is not norm-bounded, thus bypassing SOTA adversarial-example defenses. To detect perception attacks, we adopt the idea of spatio-temporal consistency, demonstrated with two methods: one model-based (Chapter 4), for detecting ghost-based camera attacks, and one data-driven (Chapter 5), which detects object misclassification attacks effectively and efficiently while remaining agnostic to the attack methodology and to the underlying object detection and tracking system. In Chapter 6, we investigate the planning module and enhance the adversarial robustness of obstacle trajectory prediction. Finally, in Chapter 7, to evaluate the control and actuation modules, we propose a hill-climbing-style attack that defeats SOTA CAN bus intrusion detection systems based on multiple CAN frames.
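The ECU impersonation weakness described in the 520 abstract comes down to classic CAN frames carrying no sender authentication: any node on the bus may transmit any arbitration ID, so a forged message is byte-identical to a legitimate one. A minimal sketch of that observation, assuming a hypothetical airbag-related arbitration ID (nothing below is taken from the dissertation):

```python
import struct

def encode_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack the ID, DLC, and payload of a classic CAN 2.0A frame.

    Illustrative only: a real controller also appends CRC/ACK fields,
    and -- the point here -- nothing in the frame authenticates the sender.
    """
    if not 0 <= can_id <= 0x7FF:
        raise ValueError("standard CAN arbitration IDs are 11 bits")
    if len(data) > 8:
        raise ValueError("classic CAN payloads are at most 8 bytes")
    # 2-byte big-endian ID + 1-byte data length code + payload
    return struct.pack(">HB", can_id, len(data)) + data

# A compromised ECU forging the (hypothetical) airbag-deployment message
# produces exactly the bytes the real ECU would send.
legit = encode_can_frame(0x050, b"\x01")
forged = encode_can_frame(0x050, b"\x01")
assert legit == forged
```

This indistinguishability is why the dissertation evaluates intrusion detection systems that look at patterns across multiple CAN frames rather than at any single frame.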
533    $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2023
538    $a Mode of access: World Wide Web
650  4 $a Computer engineering. $3 621879
650  4 $a Computer science. $3 523869
650  4 $a Automotive engineering. $3 2181195
653    $a Adversarial machine learning
653    $a Autonomous driving
653    $a Computer security
653    $a Cyber-physical system
653    $a Perception
655  7 $a Electronic books. $2 lcsh $3 542853
690    $a 0464
690    $a 0984
690    $a 0540
710 2  $a ProQuest Information and Learning Co. $3 783688
710 2  $a The University of Arizona. $b Electrical & Computer Engineering. $3 1018545
773 0  $t Dissertations Abstracts International $g 84-02B.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29325996 $z click for full text (PQDT)
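The data-driven spatio-temporal consistency defense mentioned in the 520 abstract (Chapter 5) rests on the observation that a physical object's class label should stay stable across consecutive frames of the same track. A toy majority-vote sketch of that idea, where the window size, agreement threshold, and labels are illustrative assumptions rather than the dissertation's algorithm:

```python
from collections import Counter

def temporal_consistency_flags(track_labels, window=5, min_agree=0.8):
    """Flag frames whose class label disagrees with a stable majority
    label over a sliding window of the same object track.

    A sudden label flip against an otherwise consistent history is
    treated as a possible misclassification attack.
    """
    flags = []
    for i, label in enumerate(track_labels):
        lo = max(0, i - window)
        window_labels = track_labels[lo:i + 1]
        majority, count = Counter(window_labels).most_common(1)[0]
        agree = count / len(window_labels)
        # Flag only when the history strongly agrees on a different label.
        flags.append(label != majority and agree >= min_agree)
    return flags

# A single-frame flip from "car" to "stop sign" is flagged; the stable
# frames around it are not.
track = ["car"] * 6 + ["stop sign"] + ["car"] * 3
print(temporal_consistency_flags(track))
```

Note the attack-agnostic character the abstract claims: the check never inspects how the misclassification was induced, only whether the track's labels are self-consistent over time.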
Holdings (1 item):
Barcode: W9481915
Location: Electronic Resources
Circulation category: 11. Online reading
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0