Embodied language learning in humans and machines.

Record Type: Electronic resources : Monograph/item
Title/Author: Embodied language learning in humans and machines / Yu, Chen.
Author: Yu, Chen.
Description: 156 p.
Notes: Source: Dissertation Abstracts International, Volume: 65-08, Section: B, page: 4124.
Contained By: Dissertation Abstracts International, 65-08B.
Subject: Computer Science. - Psychology, Cognitive. - Psychology, Developmental. - Artificial Intelligence.
Online resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3142332
ISBN: 0496893130
Yu, Chen. Embodied language learning in humans and machines. - 156 p.

Source: Dissertation Abstracts International, Volume: 65-08, Section: B, page: 4124.
Thesis (Ph.D.)--University of Rochester, 2004.

This thesis addresses questions of embodiment in language learning: how language is grounded in sensorimotor experience, and how language development depends on complex interactions among brain, body, and environment. Most studies of human language acquisition have focused on purely linguistic input. We argue, however, that non-linguistic information, such as visual attention and body movement, also serves as an important driving force in language learning. This work presents a formal model that explores the computational role of non-linguistic information through both empirical and computational studies, with the aim of building a more complete picture of language acquisition. We first introduce a statistical learning mechanism that provides a formal account of cross-situational observation. We then propose a unified model that incorporates different kinds of social cues, such as joint attention and prosody in speech, into the statistical learning framework. In a further experiment, we expose adult subjects to a second language to study the role of non-linguistic information in word learning. The results show that eye gaze substantially aids both speech segmentation and word-meaning association.

ISBN: 0496893130
Subjects--Topical Terms: Computer Science.
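The cross-situational mechanism the abstract describes can be illustrated with a short sketch. The Python toy below (all function names and data are hypothetical illustrations, not taken from the dissertation) accumulates word-referent co-occurrence statistics across many utterance-scene pairs, so a word's most reliable referent emerges even though no single situation is unambiguous:

```python
from collections import defaultdict

def cross_situational_learn(episodes):
    """episodes: iterable of (words, referents) pairs observed together.
    A minimal sketch of cross-situational learning, not the thesis model."""
    cooc = defaultdict(float)   # count[(word, referent)] co-occurrences
    seen = defaultdict(float)   # count[word] across all episodes
    for words, referents in episodes:
        for w in words:
            seen[w] += 1.0
            for r in referents:
                cooc[(w, r)] += 1.0
    # For each word, pick the referent with the highest conditional
    # probability P(referent | word) given the accumulated evidence.
    lexicon = {}
    for w in seen:
        candidates = {r for (w2, r) in cooc if w2 == w}
        lexicon[w] = max(candidates, key=lambda r: cooc[(w, r)] / seen[w])
    return lexicon

# Hypothetical toy data: each utterance is paired with an ambiguous scene.
episodes = [
    (["look", "a", "dog"], {"DOG", "BALL"}),
    (["the", "dog", "runs"], {"DOG"}),
    (["a", "red", "ball"], {"BALL", "CUP"}),
    (["throw", "the", "ball"], {"BALL"}),
]
print(cross_situational_learn(episodes))
# "dog" -> "DOG" and "ball" -> "BALL" dominate once evidence accumulates
```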
MARC record:
LDR 03209nmm 2200313 4500
001 1846761
005 20051103093544.5
008 130614s2004 eng d
020    $a 0496893130
035    $a (UnM)AAI3142332
035    $a AAI3142332
040    $a UnM $c UnM
100 1  $a Yu, Chen. $3 1934859
245 10 $a Embodied language learning in humans and machines.
300    $a 156 p.
500    $a Source: Dissertation Abstracts International, Volume: 65-08, Section: B, page: 4124.
500    $a Supervisor: Dana H. Ballard.
502    $a Thesis (Ph.D.)--University of Rochester, 2004.
520    $a This thesis addresses questions of embodiment in language learning: how language is grounded in sensorimotor experience, and how language development depends on complex interactions among brain, body, and environment. Most studies of human language acquisition have focused on purely linguistic input. We argue, however, that non-linguistic information, such as visual attention and body movement, also serves as an important driving force in language learning. This work presents a formal model that explores the computational role of non-linguistic information through both empirical and computational studies, with the aim of building a more complete picture of language acquisition. We first introduce a statistical learning mechanism that provides a formal account of cross-situational observation. We then propose a unified model that incorporates different kinds of social cues, such as joint attention and prosody in speech, into the statistical learning framework. In a further experiment, we expose adult subjects to a second language to study the role of non-linguistic information in word learning. The results show that eye gaze substantially aids both speech segmentation and word-meaning association.
520    $a In light of these findings on human language acquisition, we develop a multimodal embodied system that learns words from natural interactions with users. The learning system is trained in an unsupervised mode in which users perform everyday tasks while providing natural-language descriptions of their behavior. The system collects acoustic signals together with user-centric multisensory information from non-speech modalities, such as first-person video, gaze positions, head directions, and hand movements. A multimodal learning algorithm uses these data to first spot words in continuous speech and then associate action verbs and object names with their perceptually grounded meanings. The central ideas are to use non-speech contextual information to facilitate word spotting, and to exploit body movements as deictic references that associate temporally co-occurring data from different modalities into lexical items. This work represents a first step toward computational systems capable of human-like sensory perception.
590    $a School code: 0188.
650  4 $a Computer Science. $3 626642
650  4 $a Psychology, Cognitive. $3 1017810
650  4 $a Psychology, Developmental. $3 1017557
650  4 $a Artificial Intelligence. $3 769149
690    $a 0984
690    $a 0633
690    $a 0620
690    $a 0800
710 20 $a University of Rochester. $3 515736
773 0  $t Dissertation Abstracts International $g 65-08B.
790 10 $a Ballard, Dana H., $e advisor
790    $a 0188
791    $a Ph.D.
792    $a 2004
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3142332
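The second 520 field above describes a multimodal algorithm that uses temporal co-occurrence as a deictic cue to pair spotted words with concurrent percepts. As a hedged illustration (the interval format, the overlap threshold, and all labels and data below are assumptions, not the system's actual implementation), one can pair each spotted word with the action or object whose time span overlaps it most:

```python
def overlap(a, b):
    """Length of temporal overlap between intervals a=(start, end), b=(start, end)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def associate(word_spans, percept_spans, min_overlap=0.2):
    """word_spans: [(word, (start, end))]; percept_spans: [(label, (start, end))].
    Pair each spotted word with the percept it overlaps most in time; a
    minimal sketch of co-occurrence-based association, not the thesis system."""
    pairs = []
    for word, wspan in word_spans:
        scored = [(overlap(wspan, pspan), label) for label, pspan in percept_spans]
        score, label = max(scored)
        if score >= min_overlap:       # require enough shared time
            pairs.append((word, label))
    return pairs

# Hypothetical spotted words and concurrently observed percepts (seconds).
words = [("stapling", (1.0, 1.6)), ("paper", (2.0, 2.4))]
percepts = [("ACTION:staple", (0.8, 1.8)), ("OBJECT:paper", (1.9, 2.6))]
print(associate(words, percepts))
# [('stapling', 'ACTION:staple'), ('paper', 'OBJECT:paper')]
```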
Items (1 record):
Inventory Number: W9196275
Location Name: 電子資源 (Electronic Resources)
Item Class: 11.線上閱覽_V (Online reading)
Material Type: 電子書 (E-book)
Call Number: EB
Usage Class: 一般使用 (Normal)
Loan Status: On shelf
No. of reservations: 0