Multimodal Reasoning of Visual Information and Natural Language.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Multimodal Reasoning of Visual Information and Natural Language. / Chen, Kan.
Author:
Chen, Kan.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2019.
Description:
138 p.
Notes:
Source: Dissertations Abstracts International, Volume: 80-12, Section: B.
Contained By:
Dissertations Abstracts International, 80-12B.
Subject:
Electrical engineering. - Computer science.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=13807087
ISBN:
9781392170304
LDR
:03443nmm a2200325 4500
001
2209041
005
20191025102639.5
008
201008s2019 ||||||||||||||||| ||eng d
020
$a
9781392170304
035
$a
(MiAaPQ)AAI13807087
035
$a
(MiAaPQ)usc:16175
035
$a
AAI13807087
040
$a
MiAaPQ
$c
MiAaPQ
100
1
$a
Chen, Kan.
$3
1672400
245
1 0
$a
Multimodal Reasoning of Visual Information and Natural Language.
260
1
$a
Ann Arbor :
$b
ProQuest Dissertations & Theses,
$c
2019
300
$a
138 p.
500
$a
Source: Dissertations Abstracts International, Volume: 80-12, Section: B.
500
$a
Publisher info.: Dissertation/Thesis.
500
$a
Advisor: Nevatia, Ram.
502
$a
Thesis (Ph.D.)--University of Southern California, 2019.
506
$a
This item must not be sold to any third party vendors.
520
$a
Multimodal reasoning focuses on learning the correlations between the different modalities present in multimedia samples. It is an important task with many applications in daily life, e.g., autonomous driving, robotic question answering, and image retrieval engines. It is also a challenging task, closely related to machine learning, natural language processing, and other areas of computer science. Typically, multimodal reasoning can be divided into three levels: i) the coarse level treats each modality as a uniform sample and focuses on learning inter-modal correlations; ii) the fine-grained level considers each modality's own characteristics and dives into fine-grained correlation learning; iii) the knowledge level leverages external knowledge to handle more complex question-answering-style reasoning. This thesis describes my solutions at these three levels of multimodal reasoning, focusing mainly on the interaction between the natural language and visual modalities. The first part addresses the image retrieval problem, which lies at the coarse level. We introduce an attention mechanism that attends to useful information within each modality and weights each modality's importance, which boosts image retrieval performance. In the second part, we address the phrase grounding problem, which lies at the fine-grained level. We introduce a regression mechanism, reinforcement learning techniques, and multimodal consistency step by step, transferring from the supervised learning scenario to the weakly supervised scenario. All of these techniques bring concrete improvements in phrase grounding performance. In the third part, we explore the Visual Question Answering (VQA) problem at the knowledge level. Mirroring how humans approach VQA, we introduce an attention mechanism that attends to useful regions conditioned on the input query's semantics, which filters out noise in the visual modality when answering questions. Finally, we describe our recent efforts in reasoning over other modalities. We address the problem of generating sound waves from video content, where we exploit fine-grained information and adopt a perceptual loss to further polish the generated sound waves. We also outline several interesting problems that we plan to address in the future. This thesis demonstrates the effectiveness of all the algorithms on a range of publicly available video and image datasets.
590
$a
School code: 0208.
650
4
$a
Electrical engineering.
$3
649834
650
4
$a
Computer science.
$3
523869
690
$a
0544
690
$a
0984
710
2
$a
University of Southern California.
$b
Electrical Engineering.
$3
1020963
773
0
$t
Dissertations Abstracts International
$g
80-12B.
790
$a
0208
791
$a
Ph.D.
792
$a
2019
793
$a
English
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=13807087
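Several parts of the abstract rely on a query-conditioned attention mechanism that weights visual regions by their relevance to the language input. The sketch below is only a generic illustration of that idea, not the dissertation's actual model: the dimensions, layer names, and the additive scoring form are assumptions made for the example.

```python
# Illustrative sketch only: a generic query-conditioned attention layer that
# scores candidate visual regions against a query embedding and pools them,
# keeping query-relevant regions and suppressing noise. All shapes and names
# are assumptions for illustration.
import torch
import torch.nn as nn


class QueryConditionedAttention(nn.Module):
    def __init__(self, visual_dim: int, query_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.proj_v = nn.Linear(visual_dim, hidden_dim)  # project region features
        self.proj_q = nn.Linear(query_dim, hidden_dim)   # project query embedding
        self.score = nn.Linear(hidden_dim, 1)            # scalar relevance score per region

    def forward(self, regions: torch.Tensor, query: torch.Tensor):
        # regions: (batch, num_regions, visual_dim) -- e.g. detector region features
        # query:   (batch, query_dim)               -- e.g. pooled phrase/question embedding
        joint = torch.tanh(self.proj_v(regions) + self.proj_q(query).unsqueeze(1))
        weights = torch.softmax(self.score(joint).squeeze(-1), dim=1)  # (batch, num_regions)
        attended = (weights.unsqueeze(-1) * regions).sum(dim=1)        # (batch, visual_dim)
        return attended, weights


if __name__ == "__main__":
    attn = QueryConditionedAttention(visual_dim=2048, query_dim=300)
    regions = torch.randn(2, 36, 2048)   # 36 candidate regions per image (assumed)
    query = torch.randn(2, 300)          # query embedding size (assumed)
    attended, weights = attn(regions, query)
    print(attended.shape, weights.shape)  # torch.Size([2, 2048]) torch.Size([2, 36])
```

The attended feature can then be fused with the query representation for downstream scoring (retrieval, grounding, or answer prediction), which is the common pattern the abstract describes across its three levels.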
Items
Inventory Number: W9385590
Location Name: 電子資源 (Electronic resources)
Item Class: 11.線上閱覽_V (Online reading)
Material type: 電子書 (E-book)
Call number: EB
Usage Class: 一般使用 (Normal)
Loan Status: On shelf
No. of reservations: 0