Efficient training and feature induction in sequential supervised learning.
Record Type:
Language materials, printed : Monograph/item
Title/Author:
Efficient training and feature induction in sequential supervised learning.
Author:
Hao, Guohua.
Description:
101 p.
Notes:
Source: Dissertation Abstracts International, Volume: 70-11, Section: B, page: 6992.
Contained By:
Dissertation Abstracts International, 70-11B.
Subject:
Artificial Intelligence.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3380855
ISBN:
9781109452761
Efficient training and feature induction in sequential supervised learning.
Hao, Guohua.
Efficient training and feature induction in sequential supervised learning.
- 101 p.
Source: Dissertation Abstracts International, Volume: 70-11, Section: B, page: 6992.
Thesis (Ph.D.)--Oregon State University, 2009.
Sequential supervised learning problems arise in many real applications. This dissertation focuses on two important research directions in sequential supervised learning: efficient training and feature induction.
ISBN: 9781109452761
Subjects--Topical Terms: Artificial Intelligence.
LDR  03661nam 2200277 4500
001  1401581
005  20111017084356.5
008  130515s2009 ||||||||||||||||| ||eng d
020  $a 9781109452761
035  $a (UMI)AAI3380855
035  $a AAI3380855
040  $a UMI $c UMI
100 1  $a Hao, Guohua. $3 1680727
245 1 0  $a Efficient training and feature induction in sequential supervised learning.
300  $a 101 p.
500  $a Source: Dissertation Abstracts International, Volume: 70-11, Section: B, page: 6992.
502  $a Thesis (Ph.D.)--Oregon State University, 2009.
520  $a Sequential supervised learning problems arise in many real applications. This dissertation focuses on two important research directions in sequential supervised learning: efficient training and feature induction.
520  $a In the direction of efficient training, we study the training of conditional random fields (CRFs), which provide a flexible and powerful model for sequential supervised learning problems. Existing training algorithms for CRFs are slow, particularly in problems with large numbers of potential input features and feature combinations. In this dissertation, we describe a new algorithm, TREECRF, for training CRFs via gradient tree boosting. In TREECRF, the CRF potential functions are represented as weighted sums of regression trees, which provide compact representations of feature interactions, so the algorithm does not explicitly consider the potentially large parameter space. As a result, gradient tree boosting scales linearly in the order of the Markov model and in the order of the feature interactions, rather than exponentially as in previous algorithms based on iterative scaling and gradient descent. Detailed experimental results are provided to evaluate the performance of the TREECRF algorithm, and possible extensions of this algorithm are discussed.
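The functional-gradient boosting idea summarized in this abstract can be illustrated in miniature. The sketch below is not the dissertation's TREECRF code: the data and names are invented, the Markov transition terms are omitted for brevity, and only per-label potential functions are boosted. Each potential F_k is a sum of depth-1 regression stumps, each fit to the functional gradient of the log-likelihood (observed indicator minus predicted probability), so the parameter space is never enumerated explicitly:

```python
import math

# Toy data: one real-valued feature per position, two possible labels.
X = [0.5, 1.5, 2.5, 3.5, 4.0, 0.2]
Y = [0, 0, 1, 1, 1, 0]
K = 2  # number of labels

def fit_stump(xs, residuals):
    """Fit a depth-1 regression tree (stump) minimizing squared error."""
    best = None
    for thr in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x < thr]
        right = [r for x, r in zip(xs, residuals) if x >= thr]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, thr, lmean, rmean)
    _, thr, lmean, rmean = best
    return lambda x: lmean if x < thr else rmean

# Each potential function F_k is a weighted sum of regression stumps.
ensembles = [[] for _ in range(K)]

def F(k, x):
    return sum(tree(x) for tree in ensembles[k])

def probs(x):
    scores = [math.exp(F(k, x)) for k in range(K)]
    z = sum(scores)
    return [s / z for s in scores]

eta = 0.5  # shrinkage on each boosting step
for _ in range(30):  # boosting iterations
    for k in range(K):
        # Functional gradient of the log-likelihood w.r.t. F_k at each example.
        resid = [(1.0 if y == k else 0.0) - probs(x)[k] for x, y in zip(X, Y)]
        stump = fit_stump(X, resid)
        ensembles[k].append(lambda x, t=stump: eta * t(x))

preds = [max(range(K), key=lambda k: F(k, x)) for x in X]
print(preds)  # → [0, 0, 1, 1, 1, 0] on this separable toy set
```

The stumps play the role of the regression trees in the abstract: feature interactions would enter through deeper trees, without ever materializing a weight per feature combination.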
520  $a We also study the problem of handling missing input values in CRFs, which has been rarely discussed in the literature. Gradient tree boosting also makes it possible to use instance weighting (as in C4.5) and surrogate splitting (as in CART) to handle missing values in CRFs. Experimental studies of the effectiveness of these two methods (as well as standard imputation and indicator feature methods) show that instance weighting is the best method in most cases when feature values are missing at random. In the direction of feature induction, we study the search-based structured learning framework and its application to sequential supervised learning problems. By formulating the label sequence prediction process as an incremental search process from one end of a sequence to the other, this framework is able to avoid complicated inference algorithms in the training process and thus achieves very fast training speed. However, for problems where there exist long range dependencies between the current position and future positions, at each search step, this framework is unable to exploit these dependencies to make accurate predictions. In this dissertation, a multiple-instance learning based algorithm is proposed to automatically extract useful features from future positions as a way to discover and exploit these long range dependencies. Integrating this algorithm with maximum entropy Markov models yields promising experimental results on both synthetic data sets and real data sets that have long range dependencies in sequences.
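Of the missing-value strategies compared in this abstract, C4.5-style instance weighting is the simplest to sketch. The toy below is not code from the dissertation: the feature names, class labels, and the hand-built two-node tree are all invented. When a split's feature value is missing, the instance is sent down both branches, with its weight divided according to the fraction of training instances that took each branch:

```python
# A tiny hand-built decision tree. Each internal node records p_left,
# the fraction of training instances that went down the left branch;
# this fraction is reused when a test instance's feature is missing.
tree = {
    "feature": "temp", "threshold": 20.0, "p_left": 0.6,
    "left": {"leaf": {"rain": 0.8, "dry": 0.2}},
    "right": {"leaf": {"rain": 0.1, "dry": 0.9}},
}

def predict(node, x, weight=1.0):
    """Return a class -> weight dict, splitting the instance's weight
    across both branches whenever the split feature is missing."""
    if "leaf" in node:
        return {c: weight * p for c, p in node["leaf"].items()}
    v = x.get(node["feature"])
    if v is None:  # instance weighting: follow both branches fractionally
        out = predict(node["left"], x, weight * node["p_left"])
        for c, w in predict(node["right"], x, weight * (1 - node["p_left"])).items():
            out[c] = out.get(c, 0.0) + w
        return out
    child = node["left"] if v < node["threshold"] else node["right"]
    return predict(child, x, weight)

print(predict(tree, {"temp": 25.0}))  # observed value: single branch
print(predict(tree, {}))              # missing value: 0.6*left + 0.4*right
```

Surrogate splitting (the CART alternative mentioned above) would instead pick a correlated backup feature at each node; imputation and indicator features modify the data rather than the tree traversal.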
590  $a School code: 0172.
650  4  $a Artificial Intelligence. $3 769149
650  4  $a Computer Science. $3 626642
690  $a 0800
690  $a 0984
710 2  $a Oregon State University. $3 625720
773 0  $t Dissertation Abstracts International $g 70-11B.
790  $a 0172
791  $a Ph.D.
792  $a 2009
856 4 0  $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3380855
Items (1 record):
Inventory Number: W9164720
Location Name: Electronic Resources (電子資源)
Item Class: 11. Online Reading (線上閱覽_V)
Material type: E-book (電子書)
Call number: EB
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0