Privacy-Fairness: It's Complicated: An Unavoidable Unfairness of Differentially Privacy on Machine Learning Systems.
Record type: Bibliographic - electronic resource : Monograph/item
Title: Privacy-Fairness: It's Complicated: An Unavoidable Unfairness of Differentially Privacy on Machine Learning Systems.
Author: Jang, Eunbee.
Published: Ann Arbor : ProQuest Dissertations & Theses, 2022
Description: 76 p.
Note: Source: Masters Abstracts International, Volume: 84-05.
Contained by: Masters Abstracts International, 84-05.
Subject: Personal information.
Electronic resource: https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30157867
ISBN: 9798352991749
Jang, Eunbee.
Privacy-Fairness: It's Complicated: An Unavoidable Unfairness of Differentially Privacy on Machine Learning Systems.
- Ann Arbor : ProQuest Dissertations & Theses, 2022 - 76 p.
Source: Masters Abstracts International, Volume: 84-05.
Thesis (M.Sc.)--McGill University (Canada), 2022.
This item must not be sold to any third party vendors.
Any machine learning software driven by data involving sensitive information should be carefully designed so that primary concerns related to human values and rights are sufficiently protected. Privacy and fairness are two of the most prominent such concerns in machine learning, attracting attention from both academia and industry. However, they are often addressed independently, even though both rely heavily on how the data are distributed around sensitive personal attributes. Such a split approach cannot adequately reflect the reality that assessments made with respect to either privacy or fairness can complicate decisions about the other. As a starting point, we present an empirical study characterizing the impact of differential privacy on the utility and fairness of inferential systems. In particular, we investigate whether various differential privacy techniques, applied to different parts of the machine learning pipeline, pose any risks to group fairness. Our analysis reveals a convoluted picture of the interplay between privacy and fairness, particularly with respect to the sensitivity of commonly used fairness metrics, and shows that unfair outcomes are sometimes an unavoidable side effect of differential privacy. We further discuss essential factors to consider when designing a fairness evaluation for differentially private systems.
ISBN: 9798352991749
Subjects--Topical Terms: Personal information.
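The abstract describes measuring whether differential privacy applied along a machine learning pipeline disturbs group-fairness metrics. As an illustrative sketch only (not code from the thesis), the snippet below computes a demographic-parity gap on synthetic predictions and then recomputes it from group counts released through an epsilon-differentially-private Laplace mechanism, showing how the metric drifts as the privacy budget shrinks; the data, function names, and parameter values are all assumptions made for the example.

```python
# Illustrative sketch only: synthetic data, hypothetical helper names, and a
# conservative Laplace-mechanism budget split; none of this is from the thesis.
import numpy as np

rng = np.random.default_rng(0)

def parity_gap(y_pred, group):
    """Demographic-parity gap: |P(y_hat = 1 | A = 0) - P(y_hat = 1 | A = 1)|."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def private_parity_gap(y_pred, group, epsilon):
    """Same gap, but computed from counts noised by the Laplace mechanism.

    Each of the four counts (positives and totals per group) is a counting
    query with sensitivity 1, so Laplace(4/epsilon) noise on each keeps the
    whole release within an epsilon budget by basic composition.
    """
    scale = 4.0 / epsilon
    rates = []
    for g in (0, 1):
        pos = float((y_pred[group == g] == 1).sum()) + rng.laplace(0.0, scale)
        tot = float((group == g).sum()) + rng.laplace(0.0, scale)
        rates.append(float(np.clip(pos / max(tot, 1.0), 0.0, 1.0)))
    return abs(rates[0] - rates[1])

# Synthetic predictions: group 1 receives positive predictions slightly less often.
n = 5000
group = rng.integers(0, 2, size=n)
y_pred = (rng.random(n) < np.where(group == 0, 0.55, 0.45)).astype(int)

print(f"non-private parity gap: {parity_gap(y_pred, group):.3f}")
for eps in (0.1, 1.0, 10.0):
    gaps = [private_parity_gap(y_pred, group, eps) for _ in range(200)]
    print(f"eps={eps:>4}: mean={np.mean(gaps):.3f}  std={np.std(gaps):.3f}")
```

The non-private gap is a fixed quantity, while the private estimate is a random variable whose spread grows as epsilon shrinks; this is one concrete way the abstract's point about the sensitivity of common fairness metrics under differential privacy can show up in practice.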
LDR  04184nmm a2200409 4500
001  2394436
005  20240422070856.5
006  m o d
007  cr#unu||||||||
008  251215s2022 ||||||||||||||||| ||eng d
020    $a 9798352991749
035    $a (MiAaPQ)AAI30157867
035    $a (MiAaPQ)McGill_cv43p3239
035    $a AAI30157867
040    $a MiAaPQ $c MiAaPQ
100 1  $a Jang, Eunbee. $3 3763910
245 10 $a Privacy-Fairness: It's Complicated: An Unavoidable Unfairness of Differentially Privacy on Machine Learning Systems.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2022
300    $a 76 p.
500    $a Source: Masters Abstracts International, Volume: 84-05.
500    $a Advisor: Moon, AJung; Guo, Jin L. C.
502    $a Thesis (M.Sc.)--McGill University (Canada), 2022.
506    $a This item must not be sold to any third party vendors.
520    $a Any machine learning software driven by data involving sensitive information should be carefully designed so that primary concerns related to human values and rights are sufficiently protected. Privacy and fairness are two of the most prominent such concerns in machine learning, attracting attention from both academia and industry. However, they are often addressed independently, even though both rely heavily on how the data are distributed around sensitive personal attributes. Such a split approach cannot adequately reflect the reality that assessments made with respect to either privacy or fairness can complicate decisions about the other. As a starting point, we present an empirical study characterizing the impact of differential privacy on the utility and fairness of inferential systems. In particular, we investigate whether various differential privacy techniques, applied to different parts of the machine learning pipeline, pose any risks to group fairness. Our analysis reveals a convoluted picture of the interplay between privacy and fairness, particularly with respect to the sensitivity of commonly used fairness metrics, and shows that unfair outcomes are sometimes an unavoidable side effect of differential privacy. We further discuss essential factors to consider when designing a fairness evaluation for differentially private systems.
520    $a Tout logiciel d'apprentissage automatique piloté par des données impliquant des informations sensibles doit être conçu avec soin afin que les principales préoccupations liées aux valeurs et aux droits de l'homme soient suffisamment protégées. Le respect de la vie privée et l'équité sont deux des préoccupations les plus importantes de l'apprentissage automatique qui attirent les universitaires et les industriels. Cependant, elles sont souvent abordées indépendamment, bien qu'elles dépendent toutes deux fortement de la distribution des données autour d'attributs personnels sensibles. Une telle approche séparée ne peut pas répondre adéquatement à la réalité dans la mesure où les évaluations faites en même temps que la confidentialité ou l'équité pourraient compliquer la décision de l'autre. Comme point de départ, nous présentons une étude empirique pour caractériser l'impact de la confidentialité différentielle sur l'utilité et l'équité des systèmes inférentiels. En particulier, nous examinons si diverses techniques de confidentialité différentielle appliquées à différentes parties du pipeline d'apprentissage automatique présentent des risques pour l'équité du groupe. Notre analyse révèle l'image alambiquée de l'interaction entre la confidentialité et l'équité, en particulier sur la sensibilité des mesures d'équité couramment utilisées et montre que les résultats injustes sont parfois un effet secondaire inévitable de la confidentialité différentielle. Nous discutons également des facteurs essentiels à prendre en compte lors de la conception d'une évaluation de l'équité sur des systèmes à confidentialité différentielle.
590    $a School code: 0781.
650  4 $a Personal information. $3 3562412
650  4 $a Decision making. $3 517204
650  4 $a Gender. $3 2001319
650  4 $a Design. $3 518875
650  4 $a Algorithms. $3 536374
650  4 $a Privacy. $3 528582
650  4 $a Teachers. $3 529573
650  4 $a Education. $3 516579
650  4 $a Computer science. $3 523869
650  4 $a Finance. $3 542899
690    $a 0770
690    $a 0389
690    $a 0515
690    $a 0800
690    $a 0984
690    $a 0508
690    $a 0338
710 2  $a McGill University (Canada). $3 1018122
773 0  $t Masters Abstracts International $g 84-05.
790    $a 0781
791    $a M.Sc.
792    $a 2022
793    $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30157867
Holdings (1 item):
Barcode: W9502756
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online viewing)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0