Efficient training and feature induction in sequential supervised learning.

Record type: Language material, printed : Monograph/item
Title/Author: Efficient training and feature induction in sequential supervised learning.
Author: Hao, Guohua.
Pagination: 101 p.
Notes: Source: Dissertation Abstracts International, Volume: 70-11, Section: B, page: 6992.
Dissertation note: Thesis (Ph.D.)--Oregon State University, 2009.
Contained by: Dissertation Abstracts International, 70-11B.
Subject: Artificial Intelligence.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3380855
ISBN: 9781109452761
LDR  03661nam 2200277 4500
001  1401581
005  20111017084356.5
008  130515s2009 ||||||||||||||||| ||eng d
020     $a 9781109452761
035     $a (UMI)AAI3380855
035     $a AAI3380855
040     $a UMI $c UMI
100  1  $a Hao, Guohua. $3 1680727
245  10 $a Efficient training and feature induction in sequential supervised learning.
300     $a 101 p.
500     $a Source: Dissertation Abstracts International, Volume: 70-11, Section: B, page: 6992.
502     $a Thesis (Ph.D.)--Oregon State University, 2009.
520     $a Sequential supervised learning problems arise in many real applications. This dissertation focuses on two important research directions in sequential supervised learning: efficient training and feature induction.
520     $a In the direction of efficient training, we study the training of conditional random fields (CRFs), which provide a flexible and powerful model for sequential supervised learning problems. Existing training algorithms for CRFs are slow, particularly in problems with large numbers of potential input features and feature combinations. In this dissertation, we describe a new algorithm, TREECRF, for training CRFs via gradient tree boosting. In TREECRF, the CRF potential functions are represented as weighted sums of regression trees, which provide compact representations of feature interactions, so the algorithm never explicitly considers the potentially large parameter space. As a result, gradient tree boosting scales linearly in the order of the Markov model and in the order of the feature interactions, rather than exponentially as in previous algorithms based on iterative scaling and gradient descent. Detailed experimental results are provided to evaluate the performance of the TREECRF algorithm, and possible extensions of the algorithm are discussed.
520     $a We also study the problem of handling missing input values in CRFs, which has rarely been discussed in the literature. Gradient tree boosting also makes it possible to use instance weighting (as in C4.5) and surrogate splitting (as in CART) to handle missing values in CRFs. Experimental studies of the effectiveness of these two methods (as well as standard imputation and indicator feature methods) show that instance weighting is the best method in most cases when feature values are missing at random. In the direction of feature induction, we study the search-based structured learning framework and its application to sequential supervised learning problems. By formulating label sequence prediction as an incremental search from one end of a sequence to the other, this framework avoids complicated inference algorithms during training and thus achieves very fast training speed. However, for problems with long-range dependencies between the current position and future positions, this framework is unable to exploit those dependencies at each search step to make accurate predictions. In this dissertation, a multiple-instance learning based algorithm is proposed to automatically extract useful features from future positions as a way to discover and exploit these long-range dependencies. Integrating this algorithm with maximum entropy Markov models yields promising experimental results on both synthetic and real data sets that have long-range dependencies in sequences.
590     $a School code: 0172.
650   4 $a Artificial Intelligence. $3 769149
650   4 $a Computer Science. $3 626642
690     $a 0800
690     $a 0984
710  2  $a Oregon State University. $3 625720
773  0  $t Dissertation Abstracts International $g 70-11B.
790     $a 0172
791     $a Ph.D.
792     $a 2009
856  40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3380855
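The second 520 field describes TREECRF: CRF potential functions represented as weighted sums of regression trees, each fit to the functional gradient of the log-likelihood. A minimal sketch of that boosting loop, heavily simplified — binary labels, depth-1 trees (stumps), and a per-position logistic model standing in for the full chain-structured CRF, whose gradients would actually require forward-backward inference. All function names here are hypothetical, not from the dissertation:

```python
import numpy as np

def fit_stump(X, r):
    """Fit a depth-1 regression tree (stump) to residuals r by exhaustive search."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if err < best_err:
                best_err, best = err, (j, t, lv, rv)
    return best

def stump_predict(stump, X):
    j, t, lv, rv = stump
    return np.where(X[:, j] <= t, lv, rv)

def boost_potentials(X, y, n_rounds=20, lr=0.5):
    """The potential F(x) is a weighted sum of regression trees, each fit to
    the functional gradient of the log-likelihood, y - sigmoid(F)."""
    F = np.zeros(len(y))
    trees = []
    for _ in range(n_rounds):
        p = 1.0 / (1.0 + np.exp(-F))
        grad = y - p                      # functional gradient at each position
        stump = fit_stump(X, grad)
        trees.append(stump)
        F += lr * stump_predict(stump, X)
    return trees

def predict(trees, X, lr=0.5):
    F = sum(lr * stump_predict(s, X) for s in trees)
    return (F > 0).astype(int)

# Toy data: the label depends only on feature 0.
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = (X[:, 0] > 0.5).astype(int)
trees = boost_potentials(X, y)
acc = (predict(trees, X) == y).mean()
```

Note the point the abstract makes: the model is never written out as an explicit weight vector over feature combinations — each round only materializes the one tree the gradient asks for.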
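The third 520 field mentions C4.5-style instance weighting for missing input values: an instance whose split feature is missing is sent down both branches, weighted by the fraction of training instances observed on each side. A tiny sketch of the prediction-time rule, with made-up split statistics (all numbers hypothetical):

```python
import math

def weighted_stump_predict(x, threshold, p_left, leaf_left, leaf_right):
    """C4.5-style instance weighting: an observed value follows one branch;
    a missing value is split across both branches, weighted by the fraction
    of training instances that went each way, and the leaf outputs are blended."""
    if math.isnan(x):
        return p_left * leaf_left + (1.0 - p_left) * leaf_right
    return leaf_left if x <= threshold else leaf_right

# Hypothetical stump: split at 0.5; 60% of training data went left;
# the left leaf outputs 0.9, the right leaf 0.2.
print(weighted_stump_predict(0.3, 0.5, 0.6, 0.9, 0.2))           # observed -> left leaf
print(weighted_stump_predict(float("nan"), 0.5, 0.6, 0.9, 0.2))  # missing -> 0.6*0.9 + 0.4*0.2
```

Surrogate splitting (the CART alternative the abstract compares against) would instead pick a backup feature correlated with the split feature and branch on that.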
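The same field describes search-based structured learning: labels are committed one position at a time by a local classifier conditioned on the previous decision, so training needs no global inference. A toy greedy decoder in that style, with a hypothetical scoring function standing in for the maximum-entropy model; it also shows the framework's weakness the abstract targets, since each commitment ignores future positions:

```python
LABELS = ("A", "B")

def greedy_decode(obs, score):
    """Search-based structured prediction: commit labels left to right using a
    local scorer conditioned on the previous label; no global inference step."""
    labels, prev = [], None
    for x in obs:
        best = max(LABELS, key=lambda y: score(x, prev, y))
        labels.append(best)
        prev = best
    return labels

def score(x, prev, y):
    """Hypothetical local model: match the observation, but strongly prefer
    staying in state "B" once it has been entered."""
    s = 1.0 if y == x else 0.0
    if prev == "B" and y == "B":
        s += 1.5
    return s

print(greedy_decode(["A", "B", "A", "A"], score))  # -> ['A', 'B', 'B', 'B']
```

The dissertation's multiple-instance learning extension addresses exactly this: it pulls features from future positions into the local scorer so long-range evidence can influence the current decision.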