Responsibility Internalism and Responsibility for AI.
Record Type:
Bibliographic–Electronic resource : Monograph/item
Title/Author:
Responsibility Internalism and Responsibility for AI. /
Author:
Demirtas, Huzeyfe.
Pagination:
1 online resource (146 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 84-12, Section: B.
Contained By:
Dissertations Abstracts International, 84-12B.
Subject:
Philosophy.
Electronic Resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30423663 (click for full text, PQDT)
ISBN:
9798379704070
LDR
:02744nmm a2200409K 4500
001
2363665
005
20231127093614.5
006
m o d
007
cr mn ---uuuuu
008
241011s2023 xx obm 000 0 eng d
020
$a
9798379704070
035
$a
(MiAaPQ)AAI30423663
035
$a
AAI30423663
040
$a
MiAaPQ
$b
eng
$c
MiAaPQ
$d
NTU
100
1
$a
Demirtas, Huzeyfe.
$3
3704441
245
1 0
$a
Responsibility Internalism and Responsibility for AI.
264
0
$c
2023
300
$a
1 online resource (146 pages)
336
$a
text
$b
txt
$2
rdacontent
337
$a
computer
$b
c
$2
rdamedia
338
$a
online resource
$b
cr
$2
rdacarrier
500
$a
Source: Dissertations Abstracts International, Volume: 84-12, Section: B.
500
$a
Advisor: Bradley, Ben.
502
$a
Thesis (Ph.D.)--Syracuse University, 2023.
504
$a
Includes bibliographical references
520
$a
I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or being apt for praise or blame) depends only on factors internal to agents. Employing this view, I also argue that no one is responsible for what AI does, but that this isn't morally problematic in a way that counts against developing or using AI. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility condition (or consequences). I argue that causal responsibility is irrelevant to moral responsibility, and that the control condition and the epistemic condition depend only on factors internal to agents. Moreover, since what AI does is at best a consequence of our actions, and the consequences of our actions are irrelevant to our responsibility, no one is responsible for what AI does. That is, the so-called responsibility gap exists. However, this isn't morally worrisome in a way that counts against developing or using AI. First, I argue that current AI doesn't generate a new kind of concern about responsibility that older technologies don't. Then, I argue that the responsibility gap is not worrisome, because neither the responsibility gap nor my argument for its existence entails that no one can be justly punished, held accountable, or incur duties of reparation when AI causes harm.
533
$a
Electronic reproduction.
$b
Ann Arbor, Mich. :
$c
ProQuest,
$d
2023
538
$a
Mode of access: World Wide Web
650
4
$a
Philosophy.
$3
516511
650
4
$a
Epistemology.
$3
896969
653
$a
Using AI
653
$a
Blameworthiness
653
$a
Degrees of causation
653
$a
Praiseworthiness
653
$a
Responsibility
653
$a
Self-driving cars
655
7
$a
Electronic books.
$2
lcsh
$3
542853
690
$a
0422
690
$a
0800
690
$a
0393
710
2
$a
ProQuest Information and Learning Co.
$3
783688
710
2
$a
Syracuse University.
$b
Philosophy.
$3
2101009
773
0
$t
Dissertations Abstracts International
$g
84-12B.
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30423663
$z
click for full text (PQDT)
Holdings

Barcode: W9486021
Location: Electronic Resources
Circulation Category: 11. Online Reading_V
Material Type: E-book
Call Number: EB
Use Type: Normal
Loan Status: On shelf
Hold Status: 0