An Approximate Dynamic Programming Approach to Future Navy Fleet Investment Assessments.
Record Type:
Electronic resources : Monograph/item
Title/Author:
An Approximate Dynamic Programming Approach to Future Navy Fleet Investment Assessments.
Author:
Powers, Matthew J.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2022.
Description:
107 p.
Notes:
Source: Dissertations Abstracts International, Volume: 84-02, Section: B.
Contained By:
Dissertations Abstracts International, 84-02B.
Subject:
Military history.
Online resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29162217
ISBN:
9798841751700
Powers, Matthew J.
An Approximate Dynamic Programming Approach to Future Navy Fleet Investment Assessments.
- Ann Arbor : ProQuest Dissertations & Theses, 2022 - 107 p.
Source: Dissertations Abstracts International, Volume: 84-02, Section: B.
Thesis (Ph.D.)--George Mason University, 2022.
This item must not be sold to any third party vendors.
Navy decision makers and program sponsors must decide on future investments in the context of forecasted threats and operating environments. Investment assessments are difficult because forecasted costs and utilities are often based on resources and technologies that do not yet exist. Forecasted projection-model vectors are informed by current data that reflect similar or "as close as possible" technologies, and they are limited in scenario scope. That is, the common assessment method of placing representative agents in a scenario-based simulation to assess future investment utilities is limited by scenario and design capabilities.

The research objective is to overcome the limitations of specific scenario-based analyses by modeling the operational lifespan of future Navy destroyer (DDG) fleet configurations as Markov decision processes (MDPs) evaluated with dynamic programming (DP) value iteration to calculate the maximum DDG-configuration utility. All MDP parameters are informed by existing models and subject matter expert (SME) inputs. The transition probability matrices (TPMs) assess the probabilities that a DDG transitions between states as a chance function of future configuration capabilities and sequential actions, which is more representative of the operational lifetime of a configured DDG than a single scenario. Likert-type values are assigned to each pairwise decision-state so that Bellman's optimality equation solves for maximum expected value via non-discounted value iteration. These maximum expected values become the decision-variable coefficients of an integer programming configuration-destroyer assignment model that maximizes the sum of destroyer-configuration values subject to budgetary, logistic, and requirement-based constraints. DP value iteration is appropriate for this problem because the algorithm does not require a time-value discount parameter and the objective is the maximum expected value. I compare DP results to the approximate dynamic programming (ADP) method of Q-learning. Modeling with ADP removes the need for TPMs in large problem instances, thereby providing a framework for near-optimal decisions, and this research highlights the similarities between the ADP and DP solutions. ADP results align with DP results because appropriate ADP parameter settings enable the learning and exploration that guarantee near-optimal ADP solutions, opening the door to computationally scalable algorithms.

This work contributes to SME and decision-maker (DM) insight, mitigating bias toward technologically superior configurations by revealing utility values that make seemingly less capable configurations more competitive in terms of long-term value. This insight is due to DP optimal policies and ADP near-optimal policies that are driven by the values of the states, an insight that would not have been possible without this research. This study demonstrates that less advanced technologies can be deployed in such a way as to maximize their long-term utility, so that they are more valuable than expected in future operational environments. OPNAV's desire for modeling methods that complement existing campaign models is evidenced by this method being briefed to incoming OPNAV analysts as a "best practice" for evaluating complex decisions. This study's contribution to the high-visibility R3B study received high-level recognition in a Presidential Meritorious Service Medal citation.
This research contributes AI-enabled decision-making in a culture that relies on familiar, anecdotal, or experience-based approaches. AI-enabled decision-making is necessary to compete with near-peer adversaries in dynamic decision-making environments.
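The DP step described in the abstract can be made concrete with a small sketch. The following toy example is illustrative only: a hypothetical three-state, two-action DDG readiness MDP with randomly generated TPMs and Likert-style rewards (none of the numbers, states, or actions come from the dissertation), evaluated by non-discounted, finite-horizon value iteration via Bellman's optimality equation.

import numpy as np

# Hypothetical toy MDP: 3 readiness states for a configured DDG and
# 2 actions (say, "deploy" vs. "refit"). All numbers are illustrative.
n_states, n_actions, horizon = 3, 2, 30   # 30 decision epochs ~ service life
rng = np.random.default_rng(0)

# Transition probability matrices (TPMs), one row-stochastic matrix per
# action: P[a][s, s'] is the chance of moving from state s to s' under a.
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))

# Likert-style utility (1-5) for each (state, action) pair
R = rng.integers(1, 6, size=(n_states, n_actions)).astype(float)

# Non-discounted value iteration by backward induction:
#   V_t(s) = max_a [ R(s,a) + sum_{s'} P(s'|s,a) * V_{t+1}(s') ],  V_T = 0
V = np.zeros(n_states)
for t in range(horizon):
    Q = R + np.einsum("asn,n->sa", P, V)  # Q(s,a) = R(s,a) + E[V(next state)]
    V = Q.max(axis=1)                     # Bellman optimality backup
policy = Q.argmax(axis=1)

print("max expected value per starting state:", np.round(V, 2))
print("optimal first-epoch action per state:", policy)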
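Those maximum expected values then serve as decision-variable coefficients in the assignment step. Below is a minimal sketch, again with made-up values and costs and only a single budget constraint (the dissertation's model also carries logistic and requirement-based constraints); brute-force enumeration stands in for an integer programming solver, which is workable only at this toy size.

from itertools import product

# Hypothetical data: values[(destroyer, configuration)] would come from the
# DP/ADP step; the costs and budget here are invented for illustration.
values = {
    ("DDG-1", "A"): 78.4, ("DDG-1", "B"): 71.9,
    ("DDG-2", "A"): 80.2, ("DDG-2", "B"): 75.5,
    ("DDG-3", "A"): 69.0, ("DDG-3", "B"): 72.3,
}
cost = {"A": 3.0, "B": 1.8}   # notional cost per configuration
budget = 7.0

destroyers = ["DDG-1", "DDG-2", "DDG-3"]
configs = ["A", "B"]

best_value, best_plan = float("-inf"), None
for plan in product(configs, repeat=len(destroyers)):   # one config per ship
    if sum(cost[c] for c in plan) <= budget:            # budget constraint
        total = sum(values[(d, c)] for d, c in zip(destroyers, plan))
        if total > best_value:
            best_value, best_plan = total, plan

print(dict(zip(destroyers, best_plan)), "total value:", round(best_value, 1))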
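For the ADP comparison, here is a tabular Q-learning sketch on the same toy MDP (same seed and dimensions as the value-iteration example; the learning rate, exploration rate, and episode count are illustrative guesses, not the dissertation's settings). The point of the sketch is that the update never reads the TPMs, it only samples transitions from them, which is what lets ADP scale to instances where building TPMs is impractical.

import numpy as np

# Regenerate the same hypothetical MDP used in the value-iteration sketch
n_states, n_actions, horizon = 3, 2, 30
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.integers(1, 6, size=(n_states, n_actions)).astype(float)

alpha, epsilon, episodes = 0.1, 0.2, 20_000       # illustrative ADP settings
Q = np.zeros((horizon + 1, n_states, n_actions))  # time-indexed; Q[horizon]=0

for _ in range(episodes):
    s = rng.integers(n_states)                    # random starting state
    for t in range(horizon):
        # epsilon-greedy exploration over the current Q-table
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[t, s].argmax())
        s_next = rng.choice(n_states, p=P[a, s])   # sampled, not enumerated
        target = R[s, a] + Q[t + 1, s_next].max()  # non-discounted backup
        Q[t, s, a] += alpha * (target - Q[t, s, a])
        s = s_next

# With enough episodes this should land close to the DP values above
print("Q-learning value estimates:", np.round(Q[0].max(axis=1), 2))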
ISBN: 9798841751700
Subjects--Topical Terms:
Military history.
Subjects--Index Terms:
Approximate dynamic programming
An Approximate Dynamic Programming Approach to Future Navy Fleet Investment Assessments.
LDR
:04948nmm a2200385 4500
001
2394323
005
20240422070827.5
006
m o d
007
cr#unu||||||||
008
251215s2022 ||||||||||||||||| ||eng d
020
$a
9798841751700
035
$a
(MiAaPQ)AAI29162217
035
$a
AAI29162217
040
$a
MiAaPQ
$c
MiAaPQ
100
1
$a
Powers, Matthew J.
$3
3763792
245
1 3
$a
An Approximate Dynamic Programming Approach to Future Navy Fleet Investment Assessments.
260
1
$a
Ann Arbor :
$b
ProQuest Dissertations & Theses,
$c
2022
300
$a
107 p.
500
$a
Source: Dissertations Abstracts International, Volume: 84-02, Section: B.
500
$a
Advisor: Ganesan, Rajesh.
502
$a
Thesis (Ph.D.)--George Mason University, 2022.
506
$a
This item must not be sold to any third party vendors.
520
$a
Navy decision makers and program sponsors must decide on future investments in the context of forecasted threats and operating environments. Investment assessments are difficult because forecasted costs and utilities are often based on resources and technologies that do not yet exist. Forecasted projection-model vectors are informed by current data that reflect similar or "as close as possible" technologies, and they are limited in scenario scope. That is, the common assessment method of placing representative agents in a scenario-based simulation to assess future investment utilities is limited by scenario and design capabilities.

The research objective is to overcome the limitations of specific scenario-based analyses by modeling the operational lifespan of future Navy destroyer (DDG) fleet configurations as Markov decision processes (MDPs) evaluated with dynamic programming (DP) value iteration to calculate the maximum DDG-configuration utility. All MDP parameters are informed by existing models and subject matter expert (SME) inputs. The transition probability matrices (TPMs) assess the probabilities that a DDG transitions between states as a chance function of future configuration capabilities and sequential actions, which is more representative of the operational lifetime of a configured DDG than a single scenario. Likert-type values are assigned to each pairwise decision-state so that Bellman's optimality equation solves for maximum expected value via non-discounted value iteration. These maximum expected values become the decision-variable coefficients of an integer programming configuration-destroyer assignment model that maximizes the sum of destroyer-configuration values subject to budgetary, logistic, and requirement-based constraints. DP value iteration is appropriate for this problem because the algorithm does not require a time-value discount parameter and the objective is the maximum expected value. I compare DP results to the approximate dynamic programming (ADP) method of Q-learning. Modeling with ADP removes the need for TPMs in large problem instances, thereby providing a framework for near-optimal decisions, and this research highlights the similarities between the ADP and DP solutions. ADP results align with DP results because appropriate ADP parameter settings enable the learning and exploration that guarantee near-optimal ADP solutions, opening the door to computationally scalable algorithms.

This work contributes to SME and decision-maker (DM) insight, mitigating bias toward technologically superior configurations by revealing utility values that make seemingly less capable configurations more competitive in terms of long-term value. This insight is due to DP optimal policies and ADP near-optimal policies that are driven by the values of the states, an insight that would not have been possible without this research. This study demonstrates that less advanced technologies can be deployed in such a way as to maximize their long-term utility, so that they are more valuable than expected in future operational environments. OPNAV's desire for modeling methods that complement existing campaign models is evidenced by this method being briefed to incoming OPNAV analysts as a "best practice" for evaluating complex decisions. This study's contribution to the high-visibility R3B study received high-level recognition in a Presidential Meritorious Service Medal citation.

This research contributes AI-enabled decision-making in a culture that relies on familiar, anecdotal, or experience-based approaches. AI-enabled decision-making is necessary to compete with near-peer adversaries in dynamic decision-making environments.
590
$a
School code: 0883.
650
4
$a
Military history.
$3
552332
653
$a
Approximate dynamic programming
653
$a
Correspondence analysis
653
$a
Decision making
653
$a
Markov decision processes
653
$a
Fleet configurations
690
$a
0800
690
$a
0722
710
2
$a
George Mason University.
$b
Operations Research.
$3
3763793
773
0
$t
Dissertations Abstracts International
$g
84-02B.
790
$a
0883
791
$a
Ph.D.
792
$a
2022
793
$a
English
856
4 0
$u
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29162217
Items
Inventory Number: W9502643
Location Name: Electronic resources (電子資源)
Item Class: 11. Online reading_V (11.線上閱覽_V)
Material Type: E-book (電子書)
Call Number: EB
Usage Class: Normal use (一般使用)
Loan Status: On shelf
No. of Reservations: 0