Essays in Large Scale Optimization Algorithm and Its Application in Revenue Management.
Record type:
Bibliographic - electronic resource : Monograph/item
Title/Author:
Essays in Large Scale Optimization Algorithm and Its Application in Revenue Management.
Author:
Zhu, Mingxi.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2023.
Physical description:
227 p.
Notes:
Source: Dissertations Abstracts International, Volume: 85-11, Section: A.
Advisor: Ye, Yinyu; Wager, Stefan; Xu, Kuang; Mendelson, Haim.
Thesis (Ph.D.)--Stanford University, 2023.
Contained By:
Dissertations Abstracts International, 85-11A.
Subjects:
Partial differential equations. / Decomposition. / Linear programming. / Finance.
Electronic resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31017892
ISBN:
9798382625942
Abstract:
This dissertation focuses on large-scale optimization algorithms and their application in revenue management. It comprises three chapters. Chapter 1, "Managing Randomization in the Multi-Block Alternating Direction Method of Multipliers for Quadratic Optimization," provides theoretical foundations for managing randomization in the multi-block alternating direction method of multipliers (ADMM) for quadratic optimization. Chapter 2, "How a Small Amount of Data Sharing Benefits Distributed Optimization and Learning," presents both theoretical and practical evidence that sharing a small amount of data can greatly benefit distributed optimization and learning. Chapter 3, "Dynamic Exploration and Exploitation: The Case of Online Lending," studies exploration/exploitation trade-offs and the value of dynamically extracting information in the context of online lending.

The first chapter is joint work with Kresimir Mihic and Yinyu Ye. The Alternating Direction Method of Multipliers (ADMM) has attracted considerable attention for solving large-scale, objective-separable constrained optimization problems. However, the two-block variable structure of the ADMM still limits its practical computational efficiency, because at least one large matrix factorization is needed even for linear and convex quadratic programming. This drawback may be overcome by enforcing a multi-block structure on the decision variables in the original optimization problem. Unfortunately, the multi-block ADMM, with more than two blocks, is not guaranteed to converge. On the other hand, two positive developments have been made: first, if in each cyclic loop one randomly permutes the updating order of the multiple blocks, then the method converges in expectation when solving any system of linear equations with any number of blocks; second, such a randomly permuted ADMM also works for equality-constrained convex quadratic programming even when the objective function is not separable. The goal of this paper is twofold. First, we add more randomness to the ADMM by developing a randomly assembled cyclic ADMM (RAC-ADMM), in which the decision variables in each block are randomly assembled. We discuss the theoretical properties of RAC-ADMM, show when random assembly helps and when it hurts, and develop a criterion guaranteeing that it converges almost surely. Second, using this theoretical guidance, we conduct multiple numerical tests on both randomly generated and large-scale benchmark quadratic optimization problems, including continuous and binary graph-partition and quadratic-assignment problems as well as selected machine learning problems. Our numerical tests show that RAC-ADMM, with a variable-grouping strategy, can significantly improve computational efficiency on most quadratic optimization problems.

The second chapter is joint work with Yinyu Ye. Distributed optimization algorithms have been widely used in machine learning and statistical estimation, especially in settings where multiple decentralized data centers exist and the decision maker must perform collaborative learning across those centers. While distributed optimization algorithms have merits in parallel processing and protecting local data security, they often suffer from slow convergence compared with centralized optimization algorithms. This paper focuses on how sharing a small amount of data can benefit distributed optimization and learning with more advanced optimization algorithms.
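For readers who want a concrete picture of the randomly permuted multi-block ADMM that the first chapter builds on, below is a minimal NumPy sketch for the equality-constrained QP: minimize (1/2)x'Qx + c'x subject to Ax = b. The function name rp_admm_qp, the equal-size block split, and all parameter defaults are illustrative assumptions, not the dissertation's code; in particular, full RAC-ADMM also randomly re-assembles which variables form each block, which this sketch omits.

```python
import numpy as np

def rp_admm_qp(Q, c, A, b, n_blocks=4, beta=1.0, sweeps=200, seed=0):
    """Hypothetical sketch: randomly permuted multi-block ADMM for
        minimize 0.5 x'Qx + c'x  subject to  Ax = b,
    with Q positive semidefinite. Each sweep updates the blocks of x
    in a fresh random order by minimizing the augmented Lagrangian
        L(x, y) = 0.5 x'Qx + c'x - y'(Ax - b) + (beta/2)||Ax - b||^2
    one block at a time, then takes a dual step on the multiplier y."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = np.zeros(n)
    y = np.zeros(A.shape[0])
    blocks = np.array_split(np.arange(n), n_blocks)  # fixed blocks; RAC-ADMM would re-draw them
    for _ in range(sweeps):
        for i in rng.permutation(n_blocks):          # random update order each sweep
            I = blocks[i]
            AI = A[:, I]
            # Stationarity of L in block I with the other blocks fixed:
            #   (Q_II + beta*AI'AI) x_I = -(c_I + Q_{I,rest} x_rest - AI'y + beta*AI'(A_rest x_rest - b))
            H = Q[np.ix_(I, I)] + beta * AI.T @ AI
            x_rest = x.copy()
            x_rest[I] = 0.0                          # isolate the other blocks' contribution
            g = c[I] + Q[I, :] @ x_rest - AI.T @ y + beta * AI.T @ (A @ x_rest - b)
            x[I] = np.linalg.solve(H, -g)            # assumes H is nonsingular
        y = y - beta * (A @ x - b)                   # dual (multiplier) update
    return x, y

# Tiny smoke test on a random strongly convex QP with a feasible constraint.
rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
Q = M.T @ M + np.eye(8)                              # positive definite
c = rng.standard_normal(8)
A = rng.standard_normal((3, 8))
b = A @ rng.standard_normal(8)                       # b lies in the range of A
x, y = rp_admm_qp(Q, c, A, b)
print(np.linalg.norm(A @ x - b))                     # primal residual, should be small
```

The per-sweep permutation is what distinguishes this from plain cyclic multi-block ADMM, which can diverge; RAC-ADMM goes further by re-randomizing the block membership itself.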
MARC record:
LDR  04628nmm a2200349 4500
001  2400378
005  20240924103855.5
006  m o d
007  cr#unu||||||||
008  251215s2023 ||||||||||||||||| ||eng d
020  $a 9798382625942
035  $a (MiAaPQ)AAI31017892
035  $a (MiAaPQ)STANFORDnm713dt9490
035  $a AAI31017892
040  $a MiAaPQ $c MiAaPQ
100 1  $a Zhu, Mingxi. $3 3770347
245 10 $a Essays in Large Scale Optimization Algorithm and Its Application in Revenue Management.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2023
300  $a 227 p.
500  $a Source: Dissertations Abstracts International, Volume: 85-11, Section: A.
500  $a Advisor: Ye, Yinyu; Wager, Stefan; Xu, Kuang; Mendelson, Haim.
502  $a Thesis (Ph.D.)--Stanford University, 2023.
590  $a School code: 0212.
650  4 $a Partial differential equations. $3 2180177
650  4 $a Decomposition. $3 3561186
650  4 $a Linear programming. $3 560448
650  4 $a Finance. $3 542899
690  $a 0501
690  $a 0508
690  $a 0454
690  $a 0338
710 2  $a Stanford University. $3 754827
773 0  $t Dissertations Abstracts International $g 85-11A.
790  $a 0212
791  $a Ph.D.
792  $a 2023
793  $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31017892
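Since the MARC view above is just lines of "tag indicators $subfields", here is a small self-contained Python sketch that splits one such display line into its tag, indicators, and subfield pairs. It is an illustrative helper written for this exact textual layout, not part of any cataloging library, and it assumes subfield values never contain a literal "$".

```python
def parse_marc_line(line):
    """Split a display line such as
        '650  4 $a Linear programming. $3 560448'
    into (tag, indicators, [(code, value), ...]).
    LDR and control fields (tags below 010) carry raw data and are
    returned as a single ('', value) pair."""
    tag, rest = line[:3], line[3:].strip()
    if not tag.isdigit() or tag < "010" or "$" not in rest:
        return tag, "", [("", rest)]                 # LDR and 00X control fields
    indicators, _, subs = rest.partition("$")
    pairs = [(s[:1], s[1:].strip()) for s in subs.split("$") if s]
    return tag, indicators.strip(), pairs

print(parse_marc_line("650  4 $a Linear programming. $3 560448"))
# -> ('650', '4', [('a', 'Linear programming.'), ('3', '560448')])
```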
Holdings:
Barcode: W9508698
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: General use (Normal)
Loan status: On shelf
Holds: 0