Responsibility Internalism and Responsibility for AI.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Responsibility Internalism and Responsibility for AI. / Demirtas, Huzeyfe.
Author:
Demirtas, Huzeyfe.
Description:
1 online resource (146 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 84-12, Section: B.
Contained By:
Dissertations Abstracts International, 84-12B.
Subject:
Philosophy.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30423663 (click for full text, PQDT)
ISBN:
9798379704070
Thesis (Ph.D.)--Syracuse University, 2023.
Includes bibliographical references.
I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or being apt for praise or blame) depends only on factors internal to agents. Employing this view, I also argue that no one is responsible for what AI does, but that this isn't morally problematic in a way that counts against developing or using AI. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility condition (or consequences). I argue that causal responsibility is irrelevant to moral responsibility, and that the control condition and the epistemic condition depend only on factors internal to agents. Moreover, since what AI does is at best a consequence of our actions, and the consequences of our actions are irrelevant to our responsibility, no one is responsible for what AI does. That is, the so-called responsibility gap exists. However, this isn't morally worrisome for developing or using AI. First, I argue that current AI doesn't generate any new kind of concern about responsibility that older technologies don't. Then, I argue that the responsibility gap is not worrisome because neither the gap itself, nor my argument for its existence, entails that no one can be justly punished, held accountable, or incur duties of reparation when AI causes harm.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2023. Mode of access: World Wide Web.
ISBN: 9798379704070
Subjects--Topical Terms: Philosophy.
Subjects--Index Terms: Using AI
Index Terms--Genre/Form: Electronic books.
LDR  02744nmm a2200409K 4500
001  2363665
005  20231127093614.5
006  m o d
007  cr mn ---uuuuu
008  241011s2023 xx obm 000 0 eng d
020  $a 9798379704070
035  $a (MiAaPQ)AAI30423663
035  $a AAI30423663
040  $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Demirtas, Huzeyfe. $3 3704441
245 10 $a Responsibility Internalism and Responsibility for AI.
264  0 $c 2023
300  $a 1 online resource (146 pages)
336  $a text $b txt $2 rdacontent
337  $a computer $b c $2 rdamedia
338  $a online resource $b cr $2 rdacarrier
500  $a Source: Dissertations Abstracts International, Volume: 84-12, Section: B.
500  $a Advisor: Bradley, Ben.
502  $a Thesis (Ph.D.)--Syracuse University, 2023.
504  $a Includes bibliographical references.
520  $a I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or being apt for praise or blame) depends only on factors internal to agents. Employing this view, I also argue that no one is responsible for what AI does, but that this isn't morally problematic in a way that counts against developing or using AI. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility condition (or consequences). I argue that causal responsibility is irrelevant to moral responsibility, and that the control condition and the epistemic condition depend only on factors internal to agents. Moreover, since what AI does is at best a consequence of our actions, and the consequences of our actions are irrelevant to our responsibility, no one is responsible for what AI does. That is, the so-called responsibility gap exists. However, this isn't morally worrisome for developing or using AI. First, I argue that current AI doesn't generate any new kind of concern about responsibility that older technologies don't. Then, I argue that the responsibility gap is not worrisome because neither the gap itself, nor my argument for its existence, entails that no one can be justly punished, held accountable, or incur duties of reparation when AI causes harm.
533  $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2023
538  $a Mode of access: World Wide Web.
650  4 $a Philosophy. $3 516511
650  4 $a Epistemology. $3 896969
653  $a Using AI
653  $a Blameworthiness
653  $a Degrees of causation
653  $a Praiseworthiness
653  $a Responsibility
653  $a Self-driving cars
655  7 $a Electronic books. $2 lcsh $3 542853
690  $a 0422
690  $a 0800
690  $a 0393
710 2  $a ProQuest Information and Learning Co. $3 783688
710 2  $a Syracuse University. $b Philosophy. $3 2101009
773 0  $t Dissertations Abstracts International $g 84-12B.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30423663 $z click for full text (PQDT)
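The tagged fields above follow the MARC 21 convention: a three-digit field tag, optional indicators, and subfields introduced by `$` codes. As a minimal illustration (not part of the catalog record), a line of this text display can be split into its tag and subfields with a few lines of Python; the function name and the display-format assumption are ours, and this handles only the `$`-delimited text shown here, not binary MARC transmission records:

```python
# Minimal sketch: split one line of the MARC text display into its field
# tag and $-delimited subfields. Repeated subfield codes would overwrite
# each other in this simple dict-based version.
def parse_marc_line(line: str):
    head, _, rest = line.partition("$")
    tag = head.split()[0]                      # three-digit tag, e.g. "040"
    subfields = {}
    for chunk in ("$" + rest).split("$")[1:]:  # chunks like "a MiAaPQ "
        code, value = chunk[0], chunk[1:].strip()
        subfields[code] = value
    return tag, subfields

tag, sf = parse_marc_line("040 $a MiAaPQ $b eng $c MiAaPQ $d NTU")
# tag == "040"; sf == {"a": "MiAaPQ", "b": "eng", "c": "MiAaPQ", "d": "NTU"}
```

For real records, a dedicated MARC library is the better choice; this sketch only shows how the tag/subfield structure of the display decomposes.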
Location: Electronic resources (電子資源)

Items (1 record):
Inventory Number: W9486021
Location Name: Electronic resources (電子資源)
Item Class: 11.線上閱覽_V (Online reading)
Material type: E-book (電子書)
Call number: EB
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0