Constrained Inference and Decoding for Controlling Natural Language Processing Models.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Constrained Inference and Decoding for Controlling Natural Language Processing Models.
Author:
Meng, Tao.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2024.
Description:
137 p.
Notes:
Source: Dissertations Abstracts International, Volume: 85-12, Section: B.
Contained By:
Dissertations Abstracts International, 85-12B.
Subject:
Computer science.
Online resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31331729
ISBN:
9798382836713
Constrained Inference and Decoding for Controlling Natural Language Processing Models.
Meng, Tao.
Constrained Inference and Decoding for Controlling Natural Language Processing Models.
- Ann Arbor : ProQuest Dissertations & Theses, 2024 - 137 p.
Source: Dissertations Abstracts International, Volume: 85-12, Section: B.
Thesis (Ph.D.)--University of California, Los Angeles, 2024.
With the rapid development of neural models in natural language processing (NLP), large and deep models achieve state-of-the-art results across NLP tasks and are deployed in real-world applications. These models have become black boxes to humans, so effective approaches for controlling NLP models are in demand. Control helps a model solve particular tasks. For example, when we ask a model to generate a recipe, we have a constraint on which ingredients the recipe should contain. In addition, as NLP researchers, we are responsible for preventing models from generating offensive or otherwise unpredictable outputs; deploying such models in real-world applications could cause societal issues. To control NLP models, my research focuses on injecting constraints, sets of rules that the model must follow, to steer model behavior via constrained inference and decoding. My research goal is to develop techniques that leverage different kinds of constraints in various scenarios for structured prediction models and large language models. In general, constraints represent human knowledge of and expectations for model outputs, and constrained inference is the bridge between human beings and neural models.
ISBN: 9798382836713
Subjects--Topical Terms:
Computer science.
Subjects--Index Terms:
Constrained decoding
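Illustrative note: the abstract above centers on constrained decoding. The following minimal sketch is illustrative only and is not code from the dissertation; the toy model, vocabulary, and function names are hypothetical. It shows the simplest form of the idea: at each generation step, tokens that would violate a constraint are masked out of the next-token distribution, so every completed output satisfies the constraint by construction. Positive constraints, such as requiring certain ingredients to appear, need richer machinery (e.g., constrained beam search) that this sketch does not cover.

import random

# Toy "language model": an arbitrary but valid next-token distribution over a tiny vocabulary.
VOCAB = ["flour", "sugar", "salt", "peanuts", "<eos>"]

def toy_next_token_probs(prefix):
    """Return a probability distribution over VOCAB given the tokens generated so far."""
    rng = random.Random(len(prefix) + 1)  # deterministic toy weights per position
    weights = [rng.random() for _ in VOCAB]
    total = sum(weights)
    return [w / total for w in weights]

def constrained_greedy_decode(banned=frozenset({"peanuts"}), max_len=6):
    """Greedy decoding in which banned tokens receive zero probability (a hard constraint)."""
    output = []
    for _ in range(max_len):
        probs = toy_next_token_probs(output)
        # Constrained inference step: zero out banned tokens and renormalize.
        masked = [0.0 if tok in banned else p for tok, p in zip(VOCAB, probs)]
        norm = sum(masked)
        masked = [p / norm for p in masked]
        token = VOCAB[max(range(len(VOCAB)), key=lambda i: masked[i])]
        if token == "<eos>":
            break
        output.append(token)
    return output

print(constrained_greedy_decode())  # the generated "recipe" never contains "peanuts"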
Constrained Inference and Decoding for Controlling Natural Language Processing Models.
LDR 02404nmm a2200385 4500
001 2398481
005 20240812064649.5
006 m o d
007 cr#unu||||||||
008 251215s2024 ||||||||||||||||| ||eng d
020 $a 9798382836713
035 $a (MiAaPQ)AAI31331729
035 $a AAI31331729
040 $a MiAaPQ $c MiAaPQ
100 1 $a Meng, Tao. $3 3171744
245 1 0 $a Constrained Inference and Decoding for Controlling Natural Language Processing Models.
260 1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2024
300 $a 137 p.
500 $a Source: Dissertations Abstracts International, Volume: 85-12, Section: B.
500 $a Advisor: Chang, Kai-Wei.
502 $a Thesis (Ph.D.)--University of California, Los Angeles, 2024.
520 $a With the rapid development of neural models in natural language processing (NLP), large and deep models achieve state-of-the-art results across NLP tasks and are deployed in real-world applications. These models have become black boxes to humans, so effective approaches for controlling NLP models are in demand. Control helps a model solve particular tasks. For example, when we ask a model to generate a recipe, we have a constraint on which ingredients the recipe should contain. In addition, as NLP researchers, we are responsible for preventing models from generating offensive or otherwise unpredictable outputs; deploying such models in real-world applications could cause societal issues. To control NLP models, my research focuses on injecting constraints, sets of rules that the model must follow, to steer model behavior via constrained inference and decoding. My research goal is to develop techniques that leverage different kinds of constraints in various scenarios for structured prediction models and large language models. In general, constraints represent human knowledge of and expectations for model outputs, and constrained inference is the bridge between human beings and neural models.
590 $a School code: 0031.
650 4 $a Computer science. $3 523869
650 4 $a Information technology. $3 532993
653 $a Constrained decoding
653 $a Constrained inference
653 $a Constraints
653 $a Large language models
653 $a Natural language processing
690 $a 0800
690 $a 0489
690 $a 0984
710 2 $a University of California, Los Angeles. $b Computer Science 0201. $3 2049859
773 0 $t Dissertations Abstracts International $g 85-12B.
790 $a 0031
791 $a Ph.D.
792 $a 2024
793 $a English
856 4 0 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31331729
Items
Inventory Number: W9506801
Location Name: Electronic resources (電子資源)
Item Class: 11. Online reading (11.線上閱覽_V)
Material type: E-book (電子書)
Call number: EB
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0