Preferences: Optimization, importance learning and strategic behaviors.
Record type: Bibliographic, electronic resource : Monograph/item
Title: Preferences: Optimization, importance learning and strategic behaviors.
Author: Zhu, Ying.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2016
Extent: 160 p.
Note: Source: Dissertations Abstracts International, Volume: 78-03, Section: B.
Contained by: Dissertations Abstracts International, 78-03B.
Subject: Computer science.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10153856
ISBN: 9781369089776
Thesis (Ph.D.)--University of Kentucky, 2016.
This item must not be added to any third party search indexes.
Preferences are fundamental to decision making and play an important role in artificial intelligence. Our research focuses on three groups of problems based on the preference formalism Answer Set Optimization (ASO) [27]: preference aggregation problems such as computing optimal (near-optimal) solutions, strategic behaviors in preference representation, and learning ranks (weights) for preferences. In the first group of problems, of interest are optimal outcomes, that is, outcomes that are optimal with respect to the preorder defined by the preference rules. In this work, we consider computational problems concerning optimal outcomes. We propose, implement and study methods to compute an optimal outcome; to compute another optimal outcome once the first one is found; to compute an optimal outcome that is similar to (or dissimilar from) a given candidate outcome; and to compute a set of optimal answer sets, each significantly different from the others. For the decision versions of several of these problems we establish their computational complexity. For the second topic, strategic behaviors such as manipulation and bribery have received much attention from the social choice community. We study these concepts for preference formalisms that identify a set of optimal outcomes rather than a single winning outcome, the case common in social choice. Such preference formalisms are of interest in the context of combinatorial domains, where preference representations are only approximations to true preferences, and seeking a single optimal outcome runs the risk of missing the one that is optimal with respect to the actual preferences. In this work, we assume that preferences may be ranked (differ in importance), and we use the Pareto principle adjusted to the case of ranked preferences as the preference aggregation rule.
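The ranked-Pareto aggregation the abstract refers to can be illustrated with a small sketch. The encoding here (satisfaction degrees as integers per preference rule, lower meaning better, with an importance rank attached to each rule) is an illustrative assumption, not the dissertation's actual formalization.

```python
def pareto_dominates(a, b):
    """True if satisfaction vector a is at least as good as b on every
    preference (lower = better) and strictly better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))


def ranked_pareto_prefers(a, b, ranks):
    """Pareto principle adjusted to ranked preferences (illustrative):
    compare the rank groups from most to least important; the first rank
    at which one outcome Pareto-dominates the other decides, exact ties
    defer to less important ranks, and incomparability at any rank makes
    the outcomes incomparable overall.

    a, b   -- satisfaction degrees per preference rule (lower = better)
    ranks  -- importance rank of each rule (1 = most important)
    """
    for rank in sorted(set(ranks)):
        idx = [i for i, r in enumerate(ranks) if r == rank]
        ga, gb = [a[i] for i in idx], [b[i] for i in idx]
        if pareto_dominates(ga, gb):
            return True
        if pareto_dominates(gb, ga):
            return False
        if ga != gb:
            return False  # incomparable at this rank, hence overall
    return False  # fully tied: neither outcome is strictly preferred
```

With unranked preferences (all ranks equal) this degenerates to plain Pareto dominance, which is why a set of optimal outcomes, rather than a single winner, is the natural object of study.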
For two important classes of preferences, representing the extreme ends of the spectrum, we provide characterizations of situations when manipulation and bribery are possible, and establish the complexity of the corresponding decision problems. Finally, we study the problem of learning the importance of individual preferences in preference profiles aggregated by the ranked Pareto rule or positional scoring rules. We provide a polynomial-time algorithm that finds a ranking of preferences such that the ranked profile correctly decides all the examples, whenever such a ranking exists. We also show that the problem of learning a ranking that maximizes the number of correctly decided examples is NP-hard. We obtain similar results for the case of weighted profiles.
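The maximization variant of the ranking-learning problem can be illustrated by brute force. The setup below is a simplifying assumption for illustration only: each rank holds a single preference, so on pairwise examples the ranked rule reads lexicographically; the dissertation's polynomial-time algorithm for the case where a perfect ranking exists is not reproduced here.

```python
from itertools import permutations


def decides(example, ranking):
    """Check whether a ranking correctly decides one example.
    An example is (sat_a, sat_b, label): satisfaction degrees of two
    outcomes per preference (lower = better), and label True iff the
    first outcome should win. `ranking` lists preference indices from
    most to least important; the first preference distinguishing the
    outcomes decides."""
    sat_a, sat_b, label = example
    for i in ranking:
        if sat_a[i] != sat_b[i]:
            return (sat_a[i] < sat_b[i]) == label
    return False  # tied on every preference: decided for neither


def best_ranking(examples, n_prefs):
    """Exhaustively search for the strict ranking that correctly decides
    the most examples. The factorial search space is consistent with the
    NP-hardness of the maximization problem noted in the abstract."""
    return max(permutations(range(n_prefs)),
               key=lambda r: sum(decides(e, r) for e in examples))
```

For two preferences and examples that consistently favor preference 0, `best_ranking` returns the ranking `(0, 1)`, which decides every example correctly.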
Subjects--Topical Terms: Computer science.
Subjects--Index Terms: Answer set optimization
MARC record:
LDR    03851nmm a2200361 4500
001    2266654
005    20200624083543.5
008    220629s2016 ||||||||||||||||| ||eng d
020    $a 9781369089776
035    $a (MiAaPQ)AAI10153856
035    $a AAI10153856
040    $a MiAaPQ $c MiAaPQ
100 1  $a Zhu, Ying. $3 1261116
245 10 $a Preferences: Optimization, importance learning and strategic behaviors.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2016
300    $a 160 p.
500    $a Source: Dissertations Abstracts International, Volume: 78-03, Section: B.
500    $a Publisher info.: Dissertation/Thesis.
500    $a Advisor: Truszczynski, Miroslaw.
502    $a Thesis (Ph.D.)--University of Kentucky, 2016.
506    $a This item must not be added to any third party search indexes.
506    $a This item must not be sold to any third party vendors.
520    $a Preferences are fundamental to decision making and play an important role in artificial intelligence. … We obtain similar results for the case of weighted profiles. (full abstract transcribed above)
590    $a School code: 0102.
650  4 $a Computer science. $3 523869
653    $a Answer set optimization
653    $a Artificial intelligence
653    $a Manipulation and bribery
653    $a Preference reasoning
690    $a 0984
710 2  $a University of Kentucky. $b Computer Science. $3 3181539
773 0  $t Dissertations Abstracts International $g 78-03B.
790    $a 0102
791    $a Ph.D.
792    $a 2016
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10153856
Holdings (1 item):
Barcode: W9418888 · Location: Electronic resources · Circulation category: 11.線上閱覽_V (online reading) · Material type: E-book · Call number: EB · Usage type: Normal · Loan status: On shelf · Holds: 0