Enhancing LLM performance = efficacy, fine-tuning, and inference techniques /
Record Type:
Electronic resources : Monograph/item
Title/Author:
Enhancing LLM performance / edited by Peyman Passban, Andy Way, Mehdi Rezagholizadeh.
Remainder of title:
efficacy, fine-tuning, and inference techniques /
Other author:
Passban, Peyman.
Published:
Cham : Springer Nature Switzerland : Imprint: Springer, 2025.
Description:
xvii, 183 p. : ill. (some col.), digital ; 24 cm.
Contents:
Introduction and Fundamentals -- SPEED: Speculative Pipelined Execution for Efficient Decoding -- Efficient LLM Inference on CPUs -- KronA: Parameter-Efficient Tuning with Kronecker Adapter -- LoDA: Low-Dimensional Adaptation of Large Language Models -- Sparse Fine-Tuning for Inference Acceleration of Large Language Models -- TCNCA: Temporal CNN with Chunked Attention for Efficient Training on Long Sequences -- Class-Based Feature Knowledge Distillation -- On the Use of Cross-Attentive Fusion Techniques for Audio-Visual Speaker Verification -- An Efficient Clustering Algorithm for Self-Supervised Speaker Recognition -- Remaining Issues for AI.
Contained By:
Springer Nature eBook
Subject:
Machine learning.
Online resource:
https://doi.org/10.1007/978-3-031-85747-8
ISBN:
9783031857478
ISBD:
Enhancing LLM performance [electronic resource] : efficacy, fine-tuning, and inference techniques / edited by Peyman Passban, Andy Way, Mehdi Rezagholizadeh. - Cham : Springer Nature Switzerland : Imprint: Springer, 2025. - xvii, 183 p. : ill. (some col.), digital ; 24 cm. - (Machine translation: technologies and applications, ISSN 2522-803X ; v. 7).
Summary:
This book is a pioneering exploration of the state-of-the-art techniques that drive large language models (LLMs) toward greater efficiency and scalability. Edited by three distinguished experts, Peyman Passban, Mehdi Rezagholizadeh, and Andy Way, it presents practical solutions to the growing challenges of training and deploying these massive models. With their combined experience across academia, research, and industry, the editors provide insights into the tools and strategies required to improve LLM performance while reducing computational demands. This book is more than just a technical guide; it bridges the gap between research and real-world applications. Each chapter presents cutting-edge advancements in inference optimization, model architecture, and fine-tuning techniques, all designed to enhance the usability of LLMs in diverse sectors. Readers will find extensive discussions on the practical aspects of implementing and deploying LLMs in real-world scenarios. The book serves as a comprehensive resource for researchers and industry professionals, offering a balanced blend of in-depth technical insights and practical, hands-on guidance. It is a go-to reference for students and researchers in computer science and its relevant sub-branches, including machine learning, computational linguistics, and more.
ISBN: 9783031857478
Standard No.: 10.1007/978-3-031-85747-8 (doi)
Subjects--Topical Terms: Machine learning.
LC Class. No.: Q325.5
Dewey Class. No.: 006.31
MARC:
LDR    03100nmm a2200337 a 4500
001    2412443
003    DE-He213
005    20250704131705.0
006    m d
007    cr nn 008maaau
008    260204s2025 sz s 0 eng d
020    $a 9783031857478 $q (electronic bk.)
020    $a 9783031857461 $q (paper)
024 7  $a 10.1007/978-3-031-85747-8 $2 doi
035    $a 978-3-031-85747-8
040    $a GP $c GP
041 0  $a eng
050 4  $a Q325.5
072 7  $a UYQM $2 bicssc
072 7  $a MAT029000 $2 bisacsh
072 7  $a UYQM $2 thema
082 04 $a 006.31 $2 23
090    $a Q325.5 $b .E58 2025
245 00 $a Enhancing LLM performance $h [electronic resource] : $b efficacy, fine-tuning, and inference techniques / $c edited by Peyman Passban, Andy Way, Mehdi Rezagholizadeh.
260    $a Cham : $b Springer Nature Switzerland : $b Imprint: Springer, $c 2025.
300    $a xvii, 183 p. : $b ill. (some col.), digital ; $c 24 cm.
490 1  $a Machine translation: technologies and applications, $x 2522-803X ; $v v. 7
505 0  $a Introduction and Fundamentals -- SPEED: Speculative Pipelined Execution for Efficient Decoding -- Efficient LLM Inference on CPUs -- KronA: Parameter-Efficient Tuning with Kronecker Adapter -- LoDA: Low-Dimensional Adaptation of Large Language Models -- Sparse Fine-Tuning for Inference Acceleration of Large Language Models -- TCNCA: Temporal CNN with Chunked Attention for Efficient Training on Long Sequences -- Class-Based Feature Knowledge Distillation -- On the Use of Cross-Attentive Fusion Techniques for Audio-Visual Speaker Verification -- An Efficient Clustering Algorithm for Self-Supervised Speaker Recognition -- Remaining Issues for AI.
520    $a This book is a pioneering exploration of the state-of-the-art techniques that drive large language models (LLMs) toward greater efficiency and scalability. Edited by three distinguished experts, Peyman Passban, Mehdi Rezagholizadeh, and Andy Way, it presents practical solutions to the growing challenges of training and deploying these massive models. With their combined experience across academia, research, and industry, the editors provide insights into the tools and strategies required to improve LLM performance while reducing computational demands. This book is more than just a technical guide; it bridges the gap between research and real-world applications. Each chapter presents cutting-edge advancements in inference optimization, model architecture, and fine-tuning techniques, all designed to enhance the usability of LLMs in diverse sectors. Readers will find extensive discussions on the practical aspects of implementing and deploying LLMs in real-world scenarios. The book serves as a comprehensive resource for researchers and industry professionals, offering a balanced blend of in-depth technical insights and practical, hands-on guidance. It is a go-to reference for students and researchers in computer science and its relevant sub-branches, including machine learning, computational linguistics, and more.
650  0 $a Machine learning. $3 533906
650  0 $a Natural language processing (Computer science) $3 565309
650 14 $a Machine Learning. $3 3382522
650 24 $a Natural Language Processing (NLP). $3 3755514
700 1  $a Passban, Peyman. $3 3787714
700 1  $a Way, Andy. $3 3529385
700 1  $a Rezagholizadeh, Mehdi. $3 3787715
710 2  $a SpringerLink (Online service) $3 836513
773 0  $t Springer Nature eBook
830  0 $a Machine translation: technologies and applications ; $v volume 7. $3 3787716
856 40 $u https://doi.org/10.1007/978-3-031-85747-8
950    $a Education (SpringerNature-41171)
Items
Inventory Number: W9517941
Location Name: Electronic resources (電子資源)
Item Class: 11. Online reading (11.線上閱覽_V)
Material type: E-book (電子書)
Call number: EB Q325.5
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0