Interpretable Features for Distinguishing Machine Generated News Articles.
Record Type: Electronic resources : Monograph/item
Title/Author: Interpretable Features for Distinguishing Machine Generated News Articles. / Karumuri, Ravi Teja.
Published: Ann Arbor : ProQuest Dissertations & Theses, 2022
Description: 56 p.
Notes: Source: Masters Abstracts International, Volume: 83-11.
Contained By: Masters Abstracts International, 83-11.
Subject: Computer science.
ISBN: 9798802709351
Thesis (M.S.)--Arizona State University, 2022.
This item must not be sold to any third party vendors.
Social media has become a primary means of communication and a prominent source of information about day-to-day happenings in the contemporary world. The rise in the popularity of social media platforms in recent decades has empowered people with an unprecedented level of connectivity. Despite the benefits social media offers, it also comes with disadvantages. A significant downside to staying connected via social media is the susceptibility to falsified information or Fake News. Easy accessibility to social media and lack of truth verification tools favored the miscreants on online platforms to spread false propaganda at scale, ensuing chaos. The spread of misinformation on these platforms ultimately leads to mistrust and social unrest. Consequently, there is a need to counter the spread of misinformation, which could otherwise have a detrimental impact on society. A notable example of such a case is the Covid-19 pandemic misinformation spread, where coordinated misinformation campaigns misled the public on vaccination and health safety. The advancements in Natural Language Processing gave rise to sophisticated language generation models that can generate realistic-looking texts. Although the current Fake News generation process is manual, it is just a matter of time before this process gets automated at scale and generates Neural Fake News using language generation models like the Bidirectional Encoder Representations from Transformers (BERT) and the third-generation Generative Pre-trained Transformer (GPT-3). Moreover, given that the current state of fact verification is manual, there is an urgent need to develop reliable automated detection tools to counter Neural Fake News generated at scale. Existing tools demonstrate state-of-the-art performance in detecting Neural Fake News but exhibit black-box behavior.
Incorporating explainability into the Neural Fake News classification task will build trust and acceptance amongst different communities and decision-makers. Therefore, the current study proposes a new set of interpretable discriminatory features. These features capture statistical and stylistic idiosyncrasies, achieving an accuracy of 82% on Neural Fake News classification. Furthermore, this research investigates essential dependency relations contributing to the classification process. Lastly, the study concludes by providing directions for future research in building explainable tools for Neural Fake News detection.
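The abstract describes interpretable features that capture "statistical and stylistic idiosyncrasies" of a text. As a rough illustration of what such surface-level stylistic features can look like (the function name and the specific feature choices below are hypothetical sketches, not the feature set actually used in the thesis):

```python
import re
from collections import Counter

def stylistic_features(text: str) -> dict:
    """Compute a few simple statistical/stylistic features of the kind an
    interpretable news-text classifier might use (illustrative only)."""
    # Crude sentence split on terminal punctuation; empty fragments dropped.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    counts = Counter(words)
    n_words = len(words)
    return {
        "avg_sentence_len": n_words / max(len(sentences), 1),    # words per sentence
        "type_token_ratio": len(counts) / max(n_words, 1),       # vocabulary diversity
        "comma_rate": text.count(",") / max(n_words, 1),         # punctuation habit
        "avg_word_len": sum(map(len, words)) / max(n_words, 1),  # lexical complexity
    }

feats = stylistic_features("The quick brown fox jumps. The fox was quick, and it ran.")
```

Each value is a single human-readable number, so a linear classifier trained on such features can be inspected directly, in contrast to the black-box detectors the abstract criticizes.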
Subjects--Topical Terms: Computer science.
Subjects--Index Terms: Explainable AI
LDR  03552nmm a2200385 4500
001  2349749
005  20221003074945.5
008  241004s2022 eng d
020  $a 9798802709351
035  $a (MiAaPQ)AAI29167827
035  $a AAI29167827
040  $a MiAaPQ $c MiAaPQ
100 1  $a Karumuri, Ravi Teja. $3 3689165
245 10 $a Interpretable Features for Distinguishing Machine Generated News Articles.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2022
300  $a 56 p.
500  $a Source: Masters Abstracts International, Volume: 83-11.
500  $a Advisor: Liu, Huan.
502  $a Thesis (M.S.)--Arizona State University, 2022.
506  $a This item must not be sold to any third party vendors.
520  $a Social media has become a primary means of communication and a prominent source of information about day-to-day happenings in the contemporary world. The rise in the popularity of social media platforms in recent decades has empowered people with an unprecedented level of connectivity. Despite the benefits social media offers, it also comes with disadvantages. A significant downside to staying connected via social media is the susceptibility to falsified information or Fake News. Easy accessibility to social media and lack of truth verification tools favored the miscreants on online platforms to spread false propaganda at scale, ensuing chaos. The spread of misinformation on these platforms ultimately leads to mistrust and social unrest. Consequently, there is a need to counter the spread of misinformation, which could otherwise have a detrimental impact on society. A notable example of such a case is the Covid-19 pandemic misinformation spread, where coordinated misinformation campaigns misled the public on vaccination and health safety. The advancements in Natural Language Processing gave rise to sophisticated language generation models that can generate realistic-looking texts. Although the current Fake News generation process is manual, it is just a matter of time before this process gets automated at scale and generates Neural Fake News using language generation models like the Bidirectional Encoder Representations from Transformers (BERT) and the third-generation Generative Pre-trained Transformer (GPT-3). Moreover, given that the current state of fact verification is manual, there is an urgent need to develop reliable automated detection tools to counter Neural Fake News generated at scale. Existing tools demonstrate state-of-the-art performance in detecting Neural Fake News but exhibit black-box behavior. Incorporating explainability into the Neural Fake News classification task will build trust and acceptance amongst different communities and decision-makers. Therefore, the current study proposes a new set of interpretable discriminatory features. These features capture statistical and stylistic idiosyncrasies, achieving an accuracy of 82% on Neural Fake News classification. Furthermore, this research investigates essential dependency relations contributing to the classification process. Lastly, the study concludes by providing directions for future research in building explainable tools for Neural Fake News detection.
590  $a School code: 0010.
650  4 $a Computer science. $3 523869
650  4 $a Journalism. $3 576107
650  4 $a Rhetoric. $3 516647
650  4 $a Web studies. $3 2122754
650  4 $a Artificial intelligence. $3 516317
653  $a Explainable AI
653  $a Fake news
653  $a Interpretability
653  $a Neural fake news
653  $a Social media
690  $a 0984
690  $a 0646
690  $a 0800
690  $a 0391
690  $a 0681
710 20 $a Arizona State University. $b Computer Science. $3 1676136
773 0  $t Masters Abstracts International $g 83-11.
790  $a 0010
791  $a M.S.
792  $a 2022
793  $a English
Items (1 record):
Inventory Number: W9472187
Location Name: Electronic resources (電子資源)
Item Class: 11. Online reading (11.線上閱覽_V)
Material type: E-book (電子書)
Call number: EB
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0