NAVAL POSTGRADUATE SCHOOL

Monterey, California


Autonomous Agent-Based Simulation of an AEGIS Cruiser Combat Information Center Performing Battle Group Air-Defense Commander Operations

 

by

 

Sharif H. Calfee

 

March 2003

 

Thesis Co-Advisors: Neil C. Rowe

                    John Hiles


THESIS

 

This thesis done in cooperation with the MOVES Institute

Approved for public release; distribution is unlimited



THIS PAGE INTENTIONALLY LEFT BLANK



REPORT DOCUMENTATION PAGE

Form Approved OMB No. 0704-0188

Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instruction, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302, and to the Office of Management and Budget, Paperwork Reduction Project (0704-0188) Washington DC 20503.

1. AGENCY USE ONLY (Leave blank)

 

2. REPORT DATE

March 2003

3. REPORT TYPE AND DATES COVERED

Master's Thesis

4. TITLE AND SUBTITLE: Autonomous Agent-Based Simulation of an AEGIS Cruiser Combat Information Center Performing Battle Group Air-Defense Commander Operations

5. FUNDING NUMBERS

 

6. AUTHOR(S) Sharif H. Calfee

7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES)

Naval Postgraduate School

Monterey, CA 93943-5000

8. PERFORMING ORGANIZATION REPORT NUMBER

9. SPONSORING /MONITORING AGENCY NAME(S) AND ADDRESS(ES)

N/A

10. SPONSORING/MONITORING AGENCY REPORT NUMBER

11. SUPPLEMENTARY NOTES  The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.

12a. DISTRIBUTION / AVAILABILITY STATEMENT

Approved for public release; distribution is unlimited

12b. DISTRIBUTION CODE

 

13.    ABSTRACT (maximum 200 words)

 

The AEGIS Cruiser Air-Defense Simulation is a program that models the operations of a Combat Information Center (CIC) team performing the Air-Defense Commander (ADC) duties in a battle group, using Multi-Agent System (MAS) technology implemented in the Java programming language. Set in the Arabian Gulf region, the simulation is a top-view, dynamic, graphics-driven software implementation that provides a picture of the CIC team grappling with a challenging, complex problem. Conceived primarily as a system to assist ships, waterfront training teams, and battle group staffs in ADC training and doctrine formulation, the simulation was designed to provide insight into the numerous factors (skills, experience, fatigue, aircraft numbers, weather, etc.) that influence the performance of individual watchstanders and the CIC team as a whole. The program explores the team's performance under abnormal or high-intensity/stress situations by simulating the watchstanders' mental processes, decision making, communications patterns, and cognitive attributes. Every event in a scenario is logged, which allows the reconstruction of events of interest (e.g., watchstander mistakes, chain-of-error analysis) for use in post-scenario training as well as the creation of new, more focused themes for actual CIC team scenarios. The simulation also tracks various watchstander and CIC team performance metrics for review by the user.

14. SUBJECT TERMS

Battle Group Air-Defense, Multi-Agent Systems, Artificial Intelligence, Air-Defense Commander, Naval Simulations, Combat Information Center, Air-Defense Simulation, AEGIS, Cruiser, CG, Human-Computer Interface (HCI), Watchstander Training, Naval Air Defense, Threat Assessment, Decision Making, Cognitive Factors, AEGIS Doctrine, Air-Defense Doctrine, Interactive Training Systems, Watchstander Fatigue, Link-16/TADIL J, Link-11/TADIL A, USS Vincennes

15. NUMBER OF PAGES

 

16. PRICE CODE

17. SECURITY CLASSIFICATION OF REPORT

Unclassified

18. SECURITY CLASSIFICATION OF THIS PAGE

Unclassified

19. SECURITY CLASSIFICATION OF ABSTRACT

Unclassified

20. LIMITATION OF ABSTRACT

 

UL

NSN 7540-01-280-5500                                                                    Standard Form 298 (Rev. 2-89)

                                                                                        Prescribed by ANSI Std. 239-18


THIS PAGE INTENTIONALLY LEFT BLANK


Approved for public release; distribution is unlimited

 

 

Autonomous Agent-Based Simulation of an AEGIS Cruiser Combat Information Center Performing Battle Group Air-Defense Commander Operations

 

 

Sharif H. Calfee

Lieutenant, United States Navy

B.S., United States Naval Academy, 1996

 

 

Submitted in partial fulfillment of the

requirements for the degree of

 

 

MASTER OF SCIENCE IN COMPUTER SCIENCE

 

 

from the

 

 

NAVAL POSTGRADUATE SCHOOL

March 2003

 

 

 

Author:             Sharif H. Calfee

 

 

Approved by:        Neil C. Rowe

Thesis Co-Advisor

 

 

John Hiles

Thesis Co-Advisor

 

 

Peter J. Denning

Chairman, Department of Computer Science

 



THIS PAGE INTENTIONALLY LEFT BLANK


ABSTRACT

 

 

 

The AEGIS Cruiser Air-Defense Simulation is a program that models the operations of a Combat Information Center (CIC) team performing the Air-Defense Commander (ADC) duties in a battle group, using Multi-Agent System (MAS) technology implemented in the Java programming language. Set in the Arabian Gulf region, the simulation is a top-view, dynamic, graphics-driven software implementation that provides a picture of the CIC team grappling with a challenging, complex problem. Conceived primarily as a system to assist ships, waterfront training teams, and battle group staffs in ADC training and doctrine formulation, the simulation was designed to provide insight into the numerous factors (skills, experience, fatigue, aircraft numbers, weather, etc.) that influence the performance of individual watchstanders and the CIC team as a whole. The program explores the team's performance under abnormal or high-intensity/stress situations by simulating the watchstanders' mental processes, decision making, communications patterns, and cognitive attributes. Every event in a scenario is logged, which allows the reconstruction of events of interest (e.g., watchstander mistakes, chain-of-error analysis) for use in post-scenario training as well as the creation of new, more focused themes for actual CIC team scenarios. The simulation also tracks various watchstander and CIC team performance metrics for review by the user.

THIS PAGE INTENTIONALLY LEFT BLANK

 


TABLE OF CONTENTS

 

 

 

I.��������� introduction........................................................................................................ 1

A.������� the aegis cruiser battle group Air-Defense simulation 1

B.        scope of the cruiser Air-Defense simulation project....................... 2

1.�������� ADC Simulation Project Thesis........................................................... 2

2.�������� Interviews with Air-Defense Experts.................................................. 3

3.�������� ADC Simulation Design....................................................................... 3

4.�������� Testing and Analysis of ADC Simulation and Conduct of Reality Survey 5

C.������� relevance of the adc simulation in training for the complex and challenging task of Air-Defense OPERATIONS in the modern era���������� 6

1.�������� Situation of Concern............................................................................. 6

2.������ Current Training Needs and the ADC Simulation.............................. 6

a.�������� Current Situation..................................................................... 6

b.�������� The Need for New Systems to Assist Training Teams............ 6

c.�������� A Potential Solution................................................................ 7

D.������� brief history of naval AND battle group Air Defense... 7

E.�������� watchstander organization of a cruiser combat information center......................................................................................................................... 11

1.�������� Overview of a CIC Organization........................................................ 11

2.�������� Brief Description of the CIC Air-Defense Watchstanders.............. 12

a.�������� Force Tactical Action Officer (F-TAO)................................ 12

b.�������� Force Anti-Air Warfare Coordinator (F-AAWC)................. 12

c.�������� Ship Tactical Action Officer (S-TAO).................................. 12

d.        Ship Anti-Air Warfare Coordinator (S-AAWC)................................ 13

e.�������� Electronic Warfare Control Officer (EWCO)....................... 13

f.��������� Radar Systems Controller (RSC)........................................... 13

g.�������� Tactical Information Coordinator (TIC).............................. 14

h.�������� Identification Supervisor (IDS)............................................. 14

i.��������� Combat Systems Coordinator (CSC)..................................... 14

j.��������� Missile Systems Supervisor (MSS)......................................... 14

k.�������� Red Crown (RC)..................................................................... 15

F.�������� application of multi-agent system technology in the adc simulation......................................................................................................................... 15

II.������� related work in the area of naval Air-Defense simulation 19

A.������� related work introduction........................................................ 19

B.������� Area Air-Defense commander (aadc) battle management system�������� 20

c.������� tactical decision-making under stress (tadmus) DECISION SUPPORT SYSTEM......................................................................................................... 21

d.������� multi-modal watch station (mmws) program.................. 23

e.        naval Air-Defense threat assessment: cognitive factors model.................... 25

f.        Air threat assessment: research, model, and display guidelines.................. 26

g.������� cognitive and behavioral task implications for three dimensional displays used in combat information/direction centers��� 28

h.������� battle force tactical training (bftt) SYSTEM.................. 29

i.        naval combat simulation video games: the precursor to modern-day air-defense simulations.................................................................. 30

1.        Strike Fleet: The Naval Task Force Simulator™............................ 31

2.        Fifth Fleet™........................................................................................ 32

3.        Harpoon: Modern Naval Combat Simulation™ Series Video Games..... 33

4.�������� Summary............................................................................................. 35

j.�������� comparison AND contrast of the cruiser adc simulation program��� 36

k.������� research questions posed for the cruiser adc simulation program 37

III.      User-Centered Design (UCD) process of the adc simulation human-computer interface (hci).................................................................................................... 39

A.������� need for utilization of user-centered design (ucd) process in developing computer program interfaces....................... 39

B.        ucd process phase one: problem statement...................... 40

1. ������� Problem Statement............................................................................. 40

2.������ Activity/Utility to Users...................................................................... 41

3.������ Users................................................................................................... 41

4.������ Criteria for Judgment......................................................................... 41

C.        ucd process phase two: requirements gathering......... 41

1. ������� Needs Analysis................................................................................... 41

a.�������� Situation of Concern.............................................................. 41

b.�������� Need/Utility of System............................................................ 42

c.�������� Features of System.................................................................. 43

2.�������� User Analysis...................................................................................... 44

a.�������� Utility of the Simulation........................................................ 44

b.�������� Collective Team Skills and Experience Required (User Characteristics)44

c.�������� Frequency of Simulation Use................................................ 45

3.�������� Task Analysis..................................................................................... 45

d.        ucd process phase three: conceptual design of adc simulation program.................................................................................................... 46

1.�������� Conceptual Design Introduction........................................................ 46

2.�������� Conceptual Design.............................................................................. 47

a.�������� Agents...................................................................................... 47

b.�������� Objects..................................................................................... 48

c.�������� Necessary Attributes of Agents.............................................. 49

d.�������� Necessary Attributes of Objects............................................. 52

e.�������� Agent Relationship................................................................. 55

f.��������� Object Relationships.............................................................. 56

g.�������� Actions on Agents and Objects.............................................. 57

3.�������� Visual Design...................................................................................... 57

4.�������� Early Analysis..................................................................................... 58

a.�������� Reviewer #1 Comments.......................................................... 59

b.�������� Reviewer #2............................................................................. 59

e.        ucd process phase four: adc simulation interface implementation.......... 60

f.        ucd process phase five: usability analysis of adc simulation interface......................................................................................................................... 61

1.�������� Usability Analysis Introduction.......................................................... 61

2.�������� Task List Overview............................................................................ 62

3.�������� Subject Profile..................................................................................... 63

4.�������� Data Collection................................................................................... 64

5.�������� Analysis of Task Data........................................................................ 65

6.�������� Analysis of Subject Evaluation Surveys............................................ 65

a.�������� Screen Layout......................................................................... 65

b.�������� Overall Display Layout Relative for Menu-Bars and Pop-Up Menus������� 65

c.�������� Menu Location and Wording................................................. 66

d.�������� Ease of Performance of the Task Completion List............... 66

7.�������� Recommendations.............................................................................. 67

a.�������� Subject #1................................................................................ 67

b.�������� Subject #2................................................................................ 67

c.�������� Subject #3................................................................................ 67

d.�������� Subject #4................................................................................ 67

e.�������� Subject #5................................................................................ 68

g.        ucd process phase six: interface modification/redesign... 68

iV.������ description of the adc simulation program design AND structure��� 69

A.������� program language and SYSTEM REQUIREMENTS for adc simulation����� 69

B.������� discussion about multi-agent systems.................................. 69

1.�������� Coordinated Collaboration................................................................. 71

2.�������� Anticipative-Reactive Agents............................................................ 72

3.�������� Adaptation and Evolution................................................................... 73

4.�������� Cooperation within the Multi-Agent System..................................... 73

5.�������� Connector-Based Multi-Agent Systems (CMAS)............................ 75

C.������� OVERALL VISUAL DESIGN OF THE SIMULATION............................ 76

1.�������� Tactical Display.................................................................................. 76

2.�������� Contact Data Display......................................................................... 78

3.�������� Scenario Control Buttons................................................................... 79

4.�������� CIC Watchstander Display and Watchstander Attributes Display. 80

d.        adc simulation program: menu options.............................. 81

1.������ File Menu Options.............................................................................. 81

2.������ Watchstander Attributes Menu......................................................... 82

3.�������� CIC Equipment Setup Menu.............................................................. 82

4.������ Scenario External Attributes Menu................................................... 83

5.������ Doctrine Setup Menu......................................................................... 83

6.�������� Simulation Logs Menu....................................................................... 83

7.�������� Task Times and Probabilities Menu.................................................. 84

8.�������� Time Factor Ratio and Simulation Time Windows............................ 84

E.�������� design/structure of AIRCRAFT contacts............................... 84

1.������ Overview............................................................................................. 84

2.�������� Aircraft Behaviors.............................................................................. 86

a.�������� Neutral Aircraft...................................................................... 86

b.������ Hostile Aircraft....................................................................... 86

c.�������� Friendly Aircraft.................................................................... 87

3.�������� Aircraft Contact Generation Module................................................ 88

f.�������� RELEVANT SIMULATION POP-UP WINDOWS..................................... 88

1.�������� Modify Contact Attributes Window (Figure 19)................................ 88

2.        Scenario Setup Wizard Selection Window (Figure 20)..................... 89

3.        Select Specific Contact Window (Figure 21).................................... 90

4.        Scenario Run Time Input Window (Figure 22)................................. 90

G.������� design/structure of watchstander agents....................... 90

1.�������� Watchstander Attributes.................................................................... 90

a.�������� Skills........................................................................................ 90

b.�������� Experience.............................................................................. 91

c.�������� Fatigue.................................................................................... 92

d.�������� Decision-Maker Types............................................................ 93

2.�������� Watchstander Communication........................................................... 94

a.�������� Input/Receive Message Queue............................................... 95

b.�������� Watchstander Message Priority Processor............................ 95

c.�������� High/Medium/Low Priority Message Queue......................... 95

d.�������� Watchstander Action Processor............................................. 95

e.�������� Output/Transmit Message Queue.......................................... 96

3.�������� Watchstander Agents Skill Listings.................................................. 96

H.������� combat information center (cic) Combat systems equipment���� 98

1.�������� Overview............................................................................................. 98

2.�������� SPY-1B Radar System....................................................................... 99

3.�������� SLQ-32 Electronic Signal Detection System................................... 101

4.�������� Identification Friend or Foe (IFF) System....................................... 102

5.�������� Link 11 (TADIL A)/Link 16 (TADIL J) System............................. 103

6.�������� External Communications System................................................... 104

7.�������� Vertical Launching System (Surface-to-Air Missiles).................... 104

8.�������� Close-In Weapons System (CIWS)................................................. 105

I.��������� simulation log records AnD event reconstruction. 105

1.�������� Overview........................................................................................... 105

2.�������� Scenario Events Log......................................................................... 105

3.�������� Watchstander Decision History Log............................................... 106

4.�������� CIC Equipment Readiness Log....................................................... 107

5.        Watchstander Performance Log...................................................... 107

6.        Parser/Analyzer Log......................................................................... 108

J.�������� adc simulation external/environmental attributes 109

1.�������� Overview........................................................................................... 109

2.�������� Atmosphere/Weather....................................................................... 109

3.�������� Contact Density................................................................................ 110

4.�������� Scenario Threat Level...................................................................... 110

5.�������� Hostile Contact Level....................................................................... 110

K.������� adc simulation DOCTRINE attributeS.................................... 111

1. ������� Overview........................................................................................... 111

2. ������� AEGIS Doctrine................................................................................ 111

L.�������� discussion of probability AND skill-time values in adc simulation���� 112

M.������ air-defense contact identification, threat assessment AND classification in the simulation............................................. 113

N.        air-defense decision-making: inside the heads of the f-tao AND f-aawc watchstander agents.................................................................... 117

V.������� research question results AND EVALUATION OF THE SIMULATION�� 123

A.������� research question Introduction........................................... 123

1.�������� Overview........................................................................................... 123

2.�������� Testing Methodology....................................................................... 123

a.������ Scenario Default Settings.................................................... 123

b.�������� Number of Runs.................................................................... 124

c.�������� Limitation of Variability in Testing.................................... 124

3.�������� Philosophy of Testing and Data Results Analysis.......................... 124

4.�������� Philosophy of the Use of the ADC Simulation................................. 125

5.�������� Simulation Testing Input Settings and Measurements Lists......... 125

a.�������� Inputs and Functions........................................................... 125

b.�������� Independent Variables......................................................... 126

c.�������� Dependent Variables............................................................ 126

d.�������� Test Categories..................................................................... 126

B.������� radar systems controller (rsc) agent testing and analysis results....................................................................................................................... 127

1.�������� Expected Results Based on Air-Defense Expert Interviews......... 127

2.�������� Results from the Simulation (See Appendix C Section A for Graphs) 127

3. ������� Analysis of Results and Recommendations.................................... 129

a.�������� Radar Operations Skill Results........................................... 129

b.�������� Experience Level Results..................................................... 129

c.�������� Fatigue Level Results........................................................... 129

d.�������� SPY-1B Radar Results......................................................... 129

C.������� electronic warfare control officer (ewco) agent testing and analysis results................................................................................. 130

1.�������� Expected Results Based on Air-Defense Expert Interviews......... 130

2.�������� Results from the Simulation (See Appendix C Section A for Graphs) 130

3. ������� Analysis of Results and Recommendations.................................... 132

a.�������� ES Analysis Skill Results..................................................... 132

b.�������� Experience Level Results..................................................... 132

c.�������� Fatigue Level Results........................................................... 132

d.�������� SLQ-32 System Results........................................................ 133

d.������� force tactical action officer (f-tao) agent testing and analysis results..................................................................................................... 133

1.�������� Expected Results Based on Air-Defense Expert Interviews......... 133

2.        Results from the Simulation (See Appendix C Section A for Graphs)....... 133

3. ������� Analysis of Results and Recommendations.................................... 134

a.�������� Situation Analysis Skill Results.......................................... 134

b.�������� Experience Level Results..................................................... 134

c.�������� Fatigue Level Results........................................................... 135

d.�������� Decision-Maker Type Results.............................................. 135

E.�������� combat information center (cic) watch team attribute PROFILE testing and analysis........................................................................ 135

1.�������� Expected Results Based on Air-Defense Expert Interviews......... 135

a.�������� Trial Profile Summary......................................................... 135

b.�������� Expectations......................................................................... 135

2.�������� Results from the Simulation (See Appendix C Section A for Graphs) 136

3. ������� Analysis of Results and Recommendations.................................... 136

f.�������� combat information center (cic) watch team testing and analysis of weather options................................................................................. 136

1.�������� Expected Results Based on Air-Defense Expert Interviews......... 136

2.�������� Results from the Simulation (See Appendix C Section A for Graphs) 137

3. ������� Analysis of Results and Recommendations.................................... 137

g.������� results of the survey of the atrc detachment, san diego air-defense experts..................................................................................................... 137

1.�������� Survey Overview.............................................................................. 137

2.�������� RSC Watchstander Questions and Results.................................... 138

a.�������� Questions Posed.................................................................... 138

b.�������� Results (See Appendix C Section B for Graphs)................. 139

c.�������� Analysis and Recommendations.......................................... 139

3.�������� EWCO Watchstander Questions and Results................................ 140

a.�������� Questions Posed.................................................................... 140

b.�������� Results (See Appendix C Section B for Graphs)................. 141

c.�������� Analysis and Recommendations.......................................... 141

4.�������� F-TAO Watchstander Questions and Results................................. 141

a.�������� Questions Posed.................................................................... 141

b.�������� Results (See Appendix C Section B for Graphs)................. 142

c.�������� Analysis and Recommendations.......................................... 142

5.�������� CIC Team Questions and Results................................................... 143

a.�������� Questions Posed.................................................................... 143

b.�������� Results (See Appendix C Section B for Graphs)................. 144

c.�������� Analysis and Recommendations.......................................... 144

6.�������� Additional CIC Team Questions and Results................................. 144

a.�������� Questions Posed.................................................................... 144

b.�������� Results (See Appendix C Section B for Graphs)................. 145

c.�������� Analysis and Recommendations.......................................... 145

vI.������ future work and development of the cruiser adc simulation�� 147

A.������� future work introduction......................................................... 147

B.������� future work to expand the scope and detail of the adc simulation�� 148

1.�������� Implement Networked Simulation of Battle Group Air-Defense Operations������� 148

2.�������� Implement a More Detailed Watchstander Fatigue/Vigilance Model 150

3.�������� Implement Aircraft Contacts as Watchstander Agents.................. 150

4.�������� Implement a More Detailed Log Parser Using XML.................... 151

5.�������� Implement a More Detailed Capability for AEGIS and Air-Defense Doctrine���� 151

6.�������� Implement Alternate Scenario Locations........................................ 151

7.�������� Implement More Detailed Treatment of SPY-1B Radar System, SLQ-32 System, and Communications System.................................................................. 152

8.�������� Conduct a More In-Depth Study of Metrics for Watchstander Performance Attributes (OR/OA)............................................................................................ 152

9.�������� Implement the Capability to Replay Previous Scenarios and/or Portions of Those Scenarios........................................................................................................... 153

10.������ Implement the Capability to Build Scenarios with Specified Contact Aircraft of Various Types and Behaviors........................................................................ 153

c.������� future work TO adapt the adc simulation for advanced training of watchstanders................................................................................... 153

1.�������� First Phase Single Watchstander Training System......................... 153

2.�������� Second Phase Multi-Watchstander, Interlinked Training System. 154

vii.���� summary AND conclusion......................................................................... 157

appendix A.  ucd process phase three data................................................. 159

A.������� conceptual design sketches...................................................... 159

Appendix B.  ucd process phase five data..................................................... 163

A.������� Analysis of Task Data...................................................................... 163

B.������� Simulation Evaluations................................................................ 177

1.�������� Evaluation Charts (Number of Errors and Task Completion Times) 177

C.������� Simulation Evaluation Surveys............................................... 191

1.�������� Evaluation Survey Charts (Average and Raw Data)...................... 191

Appendix C.  simulation evaluation results AND Air-Defense expert survey results................................................................................................................. 195

A.������� adc simulation evaluation results...................................... 195

1.�������� Evaluation Results for the RSC Watchstander Agent................... 195

2.�������� Evaluation Results for the EWCO Watchstander Agent............... 201

3.�������� Evaluation Results for the Force TAO Watchstander Agent......... 207

4.�������� Evaluation Results for the CIC Team Comparison Trials............. 211

5.        Evaluation Results for the Scenario Weather Trials.......... 213

B.������� Air-Defense EXPERT SURVEYS of adc simulation performance������� 215

1.�������� Individual and Averaged Survey Results for the RSC Watchstander Questions215

2.�������� Individual and Averaged Survey Results for the EWCO Watchstander Questions��������� 217

3.�������� Individual and Averaged Survey Results for the Force TAO Watchstander Questions�� 219

4.�������� Individual and Averaged Survey Results for CIC Team Questions 221

5.�������� Individual and Averaged Survey Results for Additional CIC Team Questions���� 223

bibliography................................................................................................................. 225

INITIAL DISTRIBUTION LIST........................................................................................ 227

 

 

 

 

LIST OF FIGURES

 

 

 

Figure 1.���������� ADC Simulation Interface.................................................................................... 1

Figure 2.���������� CIC Air-Defense Organization........................................................................... 11

Figure 3.���������� ADC Simulation MAS Overview Diagram......................................................... 17

Figure 4.���������� Cognitively Based Model of Threat Assessment................................................. 26

Figure 5.���������� Threat Assessment Model.................................................................................. 27

Figure 6.         Strike Fleet™ Video Game............................................................................... 32

Figure 7.         Fifth Fleet™ Video Game.................................................................................. 33

Figure 8.         Harpoon Series™ Video Games........................................................................ 34

Figure 9.���������� Preliminary Conceptual Sketches of ADC Simulation GUI.................................. 58

Figure 10.�������� Early Implementation of ADC Simulation GUI before Usability Analysis.............. 61

Figure 11.�������� Updated ADC Simulation GUI following Usability Analysis................................ 68

Figure 12.�������� ADC Simulation Tactical Display....................................................................... 77

Figure 13.�������� ADC Simulation Aircraft Classification Icons...................................................... 78

Figure 14.�������� Contact Data Display......................................................................................... 79

Figure 15.�������� Scenario Control Buttons Display....................................................................... 79

Figure 16.�������� CIC Watchstander Display and Watchstander Attributes Display........................ 81

Figure 17.�������� ADC Simulation Main Menu Bar....................................................................... 81

Figure 18.�������� Generalized Aircraft Contact Object.................................................................. 85

Figure 19.�������� Modify Contact Attributes Popup Window........................................................ 89

Figure 20.�������� Scenario Setup Wizard Selection Popup Window.............................................. 89

Figure 21.�������� Select Specific Contact Popup Window............................................................. 90

Figure 22.�������� Scenario Run Time Input Popup Window........................................................... 90

Figure 23.�������� Message Handling Structure for all Watchstander Agents.................................... 95

Figure 24.�������� Link 16 Example............................................................................................. 104

Figure 25.�������� Scenario Events Log........................................................................................ 106

Figure 26.�������� Watchstander Decision History Log................................................................. 106

Figure 27.�������� CIC Equipment Readiness Log........................................................................ 107

Figure 28.�������� Watchstander Performance Log....................................................................... 108

Figure 29.�������� Parser/Analyzer Log........................................................................................ 109

Figure 30.�������� Weather Conditions Window........................................................................... 109

Figure 31.�������� Contact Density Window................................................................................. 110

Figure 32.�������� Scenario Threat Level Window........................................................................ 110

Figure 33.�������� Hostile Contact Level Window........................................................................ 111

Figure 34.�������� AEGIS (Auto-Special) Doctrine Popup Window............................................. 112

Figure 35.�������� Skill Probabilities Modification Window........................................................... 113

Figure 36.�������� Watchstander Agent Collaborative Contact Detection and Reporting Process... 114

Figure 37.�������� Generic Air Contact Classification Path............................................................ 117

Figure 38.�������� Contact Classification Artificial Neuron............................................................ 119

Figure 39.�������� Battle Group Simulation of Air-Defense Operations.......................................... 149

Figure 40.�������� Live Watchstanders Participating in Air-Defense Training Simulation................. 154

Figure 41.�������� Early Menu Design Sketches for ADC Simulation............................................. 159

Figure 42.�������� Early Menu Design Sketches for ADC Simulation............................................. 159

Figure 43.�������� Early Menu Design Sketches for ADC Simulation............................................. 160

Figure 44.�������� Early Menu Design Sketches for ADC Simulation............................................. 160

Figure 45.�������� Early Menu Design Sketches for ADC Simulation............................................. 161

Figure 46.�������� Average Number of Errors per Task................................................................ 177

Figure 47.�������� Errors During Performance of Tasks................................................................ 178

Figure 48.�������� Average Number of Errors per Task................................................................ 179

Figure 49.�������� Errors During Performance of Tasks................................................................ 180

Figure 50.�������� Average Number of Errors per Task................................................................ 180

Figure 51.�������� Average Number of Performance of Tasks....................................................... 181

Figure 52.�������� Errors During Performance of Tasks................................................................ 182

Figure 53.�������� Average Task Completion Time....................................................................... 183

Figure 54.�������� Total Time to Complete Tasks......................................................................... 184

Figure 55.        Average Task Completion Time.......................................................................... 185

Figure 56.�������� Total Time to Complete Tasks......................................................................... 186

Figure 57.�������� Average Task Completion Time....................................................................... 187

Figure 58.�������� Total Time to Complete Tasks......................................................................... 188

Figure 59.�������� Average Task Completion Time....................................................................... 189

Figure 60.�������� Total Time to Complete Tasks......................................................................... 190

Figure 61.�������� Screen Layout Survey Averages...................................................................... 191

Figure 62.�������� Survey Scores................................................................................................. 191

Figure 63.�������� Overall Display Layout Survey Averages.......................................................... 192

Figure 64.�������� Survey Scores................................................................................................. 192

Figure 65.�������� Menu Location and Wording Survey Averages................................................. 193

Figure 66.�������� Survey Scores................................................................................................. 193

Figure 67.�������� Task Completion Survey Averages.................................................................. 194

Figure 68.�������� Survey Scores................................................................................................. 194

Figure 69.�������� RSC Averaged Times-Radar Operations Skill Level......................................... 195

Figure 70.�������� RSC Averaged Errors-Radar Operations Skill Levels....................................... 196

Figure 71.�������� RSC Averaged Number Attempted CIC Classifications, Radar Operations Skill Level.196

Figure 72.�������� RSC Averaged Times-Experience Level.......................................................... 197

Figure 73.�������� RSC Averaged Number Attempted CIC Classifications-Experience Level........ 197

Figure 74.�������� RSC Averaged Times-Fatigue Level................................................................ 198

Figure 75.�������� RSC Averaged Errors-Fatigue Levels.............................................................. 198

Figure 76.�������� RSC Averaged Number Attempted CIC Classifications-Fatigue Level.............. 199

Figure 77.�������� RSC Averaged Times-SPY-1B Radar Readiness Level................................... 199

Figure 78.�������� RSC Averaged Errors-SPY-1B Radar Readiness Levels................................. 200

Figure 79.�������� RSC Averaged Number Attempted CIC Classifications-SPY-1B Radar Readiness Level.�������� 200

Figure 80.�������� EWCO Averaged Times-ES Analysis Skill....................................................... 201

Figure 81.�������� EWCO Averaged Errors-ES Analysis Skill...................................................... 201

Figure 82.�������� EWCO Averaged Number Attempted CIC Classifications-ES Analysis Skill.... 202

Figure 83.�������� EWCO Averaged Times-Experience Level...................................................... 202

Figure 84.�������� EWCO Averaged Errors-Experience Level...................................................... 203

Figure 85.�������� EWCO Averaged Number Attempted CIC Classifications-Experience Level.... 203

Figure 86.�������� EWCO Averaged Times-Fatigue Levels.......................................................... 204

Figure 87.�������� EWCO Averaged Errors-Fatigue Levels.......................................................... 204

Figure 88.�������� EWCO Averaged Number Attempted CIC Classifications-Fatigue Level......... 205

Figure 89.�������� EWCO Averaged Times-SLQ-32 System Readiness Levels............................ 205

Figure 90.        EWCO Averaged Errors-SLQ-32 System Readiness Levels............................ 206

Figure 91.�������� EWCO Averaged Number Attempted Classifications-SLQ-32 System Readiness Level.���������� 206

Figure 92.�������� Force TAO Averaged Times-Situational Awareness Skill Level........................ 207

Figure 93.�������� Force TAO Averaged Classifications Errors (Percentage)-Situation Assessment Skill Level.����� 207

Figure 94.�������� Force TAO Averaged Times-Experience Levels.............................................. 208

Figure 95.        Force TAO Averaged Classification Errors (Percentage)-Experience Level.... 208

Figure 96.        Force TAO Averaged Times-Fatigue Levels.................................................. 209

Figure 97.        Force TAO Averaged Classification Errors (Percentage)-Fatigue Levels........ 209

Figure 98.        Force TAO Averaged Times-Decision-Maker Type.......................................... 210

Figure 99.        Force TAO Averaged Classification Errors (Percentage)-Decision-Maker Type. 210

Figure 100.������ CIC Team Profile Trials Averaged Times......................................................... 211

Figure 101.������ CIC Team Profile Trials Averaged # of Classification Errors (Percentage)......... 211

Figure 102.������ CIC Team Profile Trials Averaged # of Attempted Classifications..................... 212

Figure 103.������ Scenario Weather Trials Averaged Times......................................................... 213

Figure 104.������ Scenario Weather Trials Averaged # of Classification Errors (Percentage)........ 213

Figure 105.������ Scenario Weather Trials Averaged # of Attempted CIC Classifications............. 214

Figure 106.������ Respondent Survey Results for RSC Simulation Questions................................ 215

Figure 107.������ Averaged Survey Results for RSC Simulation Questions................................... 216

Figure 108.������ Respondent Survey Results for EWCO Simulation Questions........................... 217

Figure 109.������ Averaged Survey Results for EWCO Simulation Questions............................... 218

Figure 110.������ Respondent Survey Results for Force TAO Simulation Questions..................... 219

Figure 111.������ Averaged Survey Results for Force TAO Simulation Questions........................ 220

Figure 112.������ Respondent Survey Results for CIC Team Simulation Questions....................... 221

Figure 113.������ Averaged Survey Results for CIC Team Simulation Questions.......................... 222

Figure 114.������ Respondent Survey Results for Additional CIC Team Simulation Questions...... 223

Figure 115.������ Averaged Survey Results for Additional CIC Team Simulation Questions.......... 224


THIS PAGE INTENTIONALLY LEFT BLANK


LIST OF TABLES

 

 

 

Table 1.����������� F-TAO............................................................................................................. 49

Table 2.����������� TAO................................................................................................................. 49

Table 3.����������� F-AAWC......................................................................................................... 49

Table 4.����������� AAWC............................................................................................................. 50

Table 5.����������� CSC................................................................................................................. 50

Table 6.����������� RSC.................................................................................................................. 50

Table 7.����������� MSS................................................................................................................. 51

Table 8.����������� Red Crown....................................................................................................... 51

Table 9.����������� EWCO............................................................................................................. 51

Table 10.��������� TIC................................................................................................................... 51

Table 11.��������� IDS................................................................................................................... 52

Table 12.��������� List of Tasks...................................................................................................... 63

Table 13.��������� Usability Analysis Attributes............................................................................... 64

Table 14.��������� Listing of Watchstander Messages..................................................................... 96

Table 15.��������� CIC Equipment Levels of Performance.............................................................. 99

Table 16.��������� Systems Associated with Specified Watchstander Agents................................... 99

Table 17.��������� Abstracted Model SLQ-32 System Operational Model.................................... 102

Table 18.��������� Five Categories for the IFF Systems................................................................ 103

Table 19.��������� Abstracted Model IFF System Operational Model........................................... 103

Table 20.��������� Force TAO Contact Selection Prioritization Criteria......................................... 115

Table 21.��������� Force AAWC Contact Selection Prioritization Criteria..................................... 115

Table 22.��������� Ship TAO Contact Selection Prioritization Criteria............................................ 116

Table 23.��������� RSC Contact Selection Prioritization Criteria.................................................... 116

Table 24.��������� EWCO Contact Selection Prioritization Criteria................................................ 116

Table 25.��������� IDS Contact Selection Prioritization Criteria..................................................... 116

Table 26.��������� TIC Contact Selection Prioritization Criteria..................................................... 117

Table 27.��������� Red Crown Contact Selection Prioritization Criteria.......................................... 117

Table 28.��������� Evaluation Input Cues Used by the Watchstanders........................................... 118

Table 29.��������� Default Classification Threshold Values............................................................ 120

Table 30.��������� Scoring (Weighted) Values for the Various Input Cues..................................... 121

Table 31.��������� Radar Operations Skill Tests............................................................................ 127

Table 32.��������� Experience Level Tests.................................................................................... 128

Table 33.��������� Fatigue Level Tests.......................................................................................... 128

Table 34.��������� SPY-1B Radar Tests....................................................................................... 128

Table 35.��������� Electronic Signal (ES) Analysis Skill Tests........................................................ 130

Table 36.��������� Experience Level Tests.................................................................................... 131

Table 37.��������� Fatigue Level Tests.......................................................................................... 131

Table 38.��������� SLQ-32 System Radar Tests........................................................................... 131

Table 39.��������� Situation Awareness Skill Tests........................................................................ 133

Table 40.��������� Experience Level Tests.................................................................................... 133

Table 41.��������� Fatigue Level Tests.......................................................................................... 134

Table 42.��������� Decision-maker Type Tests............................................................................. 134

Table 43.��������� CIC Watch Team Attribute Profile Tests.......................................................... 136

Table 44.��������� Weather Option Tests...................................................................................... 137

Table 45.��������� Results of RSC Questions................................................................................ 139

Table 46.��������� Results of EWCO Questions............................................................................ 141

Table 47.��������� Results of F-TAO Questions............................................................................ 142

Table 48.��������� Results of CIC Team Watchstander Questions................................................. 144

Table 49.         Results of Additional CIC Team Watchstander Questions................................. 145

Table 50.��������� Errors During Performance of Tasks................................................................ 181

 


ACKNOWLEDGMENTS

 

 

 

The completion of my thesis was the culmination of nearly a year of intense research and development, and this experience has been an extraordinarily rewarding one for me, academically and professionally. However, this thesis, as well as the enriching experience it provided me, would not have been possible without the assistance and guidance of a number of exceptional individuals listed below. These people shared their invaluable knowledge and experience with me, and I am profoundly grateful for their contribution to my academic and professional growth. To all of you, thank you!

 

Naval Postgraduate School, Monterey California

 

Neil C. Rowe, PhD - Professor, Department of Computer Science

John Hiles - Research Professor, Modeling Virtual Environments and Simulation

(MOVES) Institute

 

Donald Gaver, PhD - Associate Professor, Department of Operations Research

Patricia Jacobs, PhD - Associate Professor, Department of Operations Research

 

Robert Harney, PhD - Associate Professor, Department of Systems Engineering

 

AEGIS Training & Readiness Center (ATRC), Detachment San Diego, California

        LCDR J. Lundquist, USN

        LT Brian Deters, USN

        OSCS (SW) Mackie, USN

        OSC (SW) Couch, USN

        OSC (SW) Coleman, USN

 

Fleet Technical Support Center, Pacific (FTSCPAC)

FCC (SW) Timothy Simmons, USN

 

Command, Space and Naval Warfare Systems Center (COMSPAWARSYSCEN) San Diego, California

Glenn Osga, PhD

 

I would like to express my appreciation to COMSPAWARSYSCEN for funding my research through the SPAWAR Research Fellowship Program.


The following is a list of courses that were considerably beneficial to the completion of my thesis:

 

MV-4015  Agent-Based Autonomous Behavior for Simulations

CS-4920  Expert Systems/CS-4311 (Directed Study)

CS-4310  Artificial Intelligence Techniques for Military Applications

MV-4203  Human-Computer Interaction

CS-4322  Artificial Intelligence & Knowledge Engineering Seminar

MV-4920  Human Agents

MV-4920  Multi-Agent System Shipboard Damage Control Trainer (Directed Study)

CS-4554  Computer Network Modeling & Design

CS-3310  Artificial Intelligence

CS-3773  Java as a Second Language

 

 

 


I. INTRODUCTION

A. THE AEGIS CRUISER BATTLE GROUP AIR-DEFENSE SIMULATION

The Air-Defense Commander (ADC) Simulation is a top-view, dynamic, Java-based, graphics-driven software implementation of an AEGIS Cruiser Combat Information Center (CIC) team performing the Battle Group Air-Defense Commander duties in the Arabian Gulf region. Designed using multi-agent systems technology, it is a fully interactive and customizable program that allows the user to configure a wide variety of simulation parameters to create unique and realistic air-defense scenarios. The program simulates the mental processes, decision-making aspects, cognitive attributes, and communications of an eleven-member CIC air-defense team performing its duties under stressful conditions caused by the requirement to maintain overall situational awareness of the battle group's airspace. Figure 1 below displays the graphical user interface (GUI) of the ADC Simulation.

 

Figure 1. ADC Simulation Interface.

The ADC Simulation was designed to assist in gaining insight into, and understanding of, the effects on a CIC team or individual watchstander of varying the following:

                    Watchstander skill levels

                    Watchstander experience levels

                    Watchstander decision-maker types

                    Watchstander fatigue levels

                    Combat systems equipment readiness level

                    Aircraft density (number of aircraft)

                    Aircraft types (hostile, unknown, friendly)

                    Scenario threat level

                    Weather conditions

                    AEGIS and air-defense commander battle doctrines

The ADC Simulation gives users the capability to design and run scenarios that generate realistic problems a CIC team could encounter. The GUI allows them to watch as these scenarios unfold and to observe the performance of the simulated CIC team under the user-specified configuration. Additionally, the user can modify any of the scenario attributes "on the fly" to explore different potential outcomes. Lastly, all of the events in the scenario are logged for each watchstander and each piece of combat systems equipment, which allows for the reconstruction of particular events of interest (e.g., watchstander mistakes, misidentification of aircraft, chain-of-error analysis). The simulation also includes a capability to review the performance metrics of each watchstander (number of errors, average time to complete tasks) to ascertain the degree to which modifying various attributes influences the simulated watchstander's performance.
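
To make the logging and metrics idea concrete, the short Java sketch below shows one plausible way to keep a per-watchstander event log together with the error-count and average-task-time metrics described above. The class and field names (WatchstanderLog, Entry) are illustrative assumptions, not the simulation's actual implementation.

// Minimal sketch (not the thesis code): per-watchstander event logging plus the
// error-count and average-task-time metrics described in the text.
import java.util.ArrayList;
import java.util.List;

public class WatchstanderLog {
    public record Entry(long simTimeMillis, String watchstander, String event) {}

    private final List<Entry> entries = new ArrayList<>();
    private int errorCount = 0;
    private long totalTaskTimeMillis = 0;
    private int completedTasks = 0;

    // Record a scenario event (e.g., a classification decision or a mistake).
    public void log(long simTimeMillis, String watchstander, String event, boolean isError) {
        entries.add(new Entry(simTimeMillis, watchstander, event));
        if (isError) errorCount++;
    }

    // Record a completed task so an average completion time can be reported.
    public void taskCompleted(long durationMillis) {
        totalTaskTimeMillis += durationMillis;
        completedTasks++;
    }

    public int getErrorCount() { return errorCount; }

    public double getAverageTaskTimeMillis() {
        return completedTasks == 0 ? 0.0 : (double) totalTaskTimeMillis / completedTasks;
    }

    // Returns the log in order, supporting post-scenario reconstruction of events.
    public List<Entry> getEntries() { return List.copyOf(entries); }
}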

B. SCOPE OF THE CRUISER AIR-DEFENSE SIMULATION PROJECT

1. ADC Simulation Project Thesis

The first phase of this project was an extensive review and analysis of formal scientific literature (reports, papers, books, etc.) and research on the subjects of battle group air defense, decision-making under stress, cognitive factors in decision-making, air-defense simulations, other air-defense-related projects, and multi-agent systems. The second phase involved in-depth, detailed interviews with air-defense training experts from the AEGIS Training and Readiness Center (ATRC), San Diego Detachment, to gather direct data from experienced personnel. The third phase dealt with the design and development of the actual ADC Simulation. The next phase involved comprehensive testing of the simulation using parametric analysis and the recording of the results. The final phase used the results of the simulation to produce an ADC Simulation Realism Survey that was taken by the ATRC air-defense experts to assess the level of accuracy of the simulation compared to their professional experiences.

2. Interviews with Air-Defense Experts

The interviews with the air-defense experts at the ATRC detachment in San Diego focused on the various attributes of the watchstanders, with the objective of determining the relationships between these attributes and the performance of the CIC team, both collectively and individually. The specific attributes discussed during these interviews were skill levels, experience levels, fatigue levels, and decision-maker types. A considerable portion of the attribute discussions revolved around the debate over differentiating skill from experience in watchstander performance (discussed further in Chapter IV as part of the design of the watchstanders in the simulation). These topics were further analyzed to determine varying levels of performance for each of the attributes (i.e., Basic, Experienced, and Expert skill levels), and a set of skill types was assigned for each watchstander. To complement the skill types, estimates of the probability of success in the conduct of tasks associated with each watchstander's skills were formulated, and maximum task times (based on the air-defense experts' experiences) were assigned.
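
A minimal sketch of how such skill data might be encoded is shown below, assuming three skill levels with a probability of success and a maximum task time for each; the numeric values are placeholders, not the figures elicited from the ATRC experts.

// Illustrative sketch only: interview-derived skill levels, each carrying an assumed
// probability of task success and a maximum task time.
import java.util.Random;

public class SkillModel {
    public enum SkillLevel {
        BASIC(0.70, 30.0), EXPERIENCED(0.85, 20.0), EXPERT(0.95, 12.0);

        final double probabilityOfSuccess;  // chance the task is performed correctly
        final double maxTaskTimeSeconds;    // upper bound on time to complete the task

        SkillLevel(double probabilityOfSuccess, double maxTaskTimeSeconds) {
            this.probabilityOfSuccess = probabilityOfSuccess;
            this.maxTaskTimeSeconds = maxTaskTimeSeconds;
        }
    }

    private final Random random = new Random();

    // Returns true if the simulated watchstander completes the task correctly.
    public boolean attemptTask(SkillLevel level) {
        return random.nextDouble() < level.probabilityOfSuccess;
    }

    // Draws a task completion time uniformly up to the level's maximum task time.
    public double drawTaskTimeSeconds(SkillLevel level) {
        return random.nextDouble() * level.maxTaskTimeSeconds;
    }
}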

3. ADC Simulation Design

Utilizing the research gathered from the interviews and the formal scientific literature, the ADC Simulation was developed within a multi-agent system architecture. The simulation has the following characteristics:

                    Dynamic - The model represents a system as it changes over time.

                    Stochastic - The model contains one or more random variables that influence the events in the simulation.

                    Continuous-State Model - The state variables are continuous.

                    Continuous-Time Model - The system state is defined at all times.

                    Exogenous - The model describes activities and events in the environment that affect the system.

                    Stable - The dynamic behavior of the model is independent of time.

                    Closed Model - All input is generated internal to the model.

The ADC Simulation was designed with the following features:

                    Graphical User Interface - Displays the aircraft contacts in the battle group's operational airspace, with the capability to interact with them to determine the CIC team's assessment of their classification.

                    Implements the following watchstanders:

                    Force Tactical Action Officer (F-TAO)

                    Force Anti-Air Warfare Coordinator (F-AAWC)

                    Ship Tactical Action Officer (S-TAO)

                    Ship Anti-Air Warfare Coordinator (S-AAWC)

                    Radar Systems Controller (RSC)

                    Electronic Warfare Control Officer (EWCO)

                    Identification Supervisor (IDS)

                    Tactical Information Coordinator (TIC)

                    Combat Systems Coordinator (CSC)

                    Missile Systems Supervisor (MSS)

                    Red Crown Watchstander (RC)

                    Implements for each watchstander the following attributes (a representative sketch follows this feature list):

                    Skill Types (various)

                    Experience Level

                    Fatigue Level

                    Decision-maker Type (F-TAO, F-AAWC, S-TAO, S-AAWC)

                    Simulates the following combat systems equipment:

                    SPY-1B Radar System

                    SLQ-32 Electronic Signal Detection System

                    Link 11 (TADIL A) / Link 16 (TADIL J) System

                    Identification Friend or Foe (IFF) System

                    External Communications System

                    Vertical Launching System (VLS) - Surface-to-Air Missiles

                    Close-In Weapon System (CIWS)

 

                    Implements the following external environment attributes:

                    Scenario Weather Options

                    Scenario Threat Level Options

                    Scenario Contact Density (Numbers) Options

                    Scenario Hostile Contact Level (Numbers) Options

                    Implements an option to activate AEGIS doctrine (Auto-special).

                    Implements the following log/data recording features:

                    Overall Scenario Events Log (Major Events)

                    Decision History Log for each Watchstander

                    Readiness Log for each Combat Systems Equipment

                    Performance Metric Log for each Watchstander
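
As a rough illustration of the watchstander attributes listed above (skill types, experience level, fatigue level, and decision-maker type), the following Java sketch groups them into a single profile object. The type names and level values are assumptions for illustration only, not the thesis's actual class design.

// Minimal sketch of a watchstander attribute profile; names and levels are illustrative.
import java.util.Map;

public class WatchstanderProfile {
    public enum ExperienceLevel { NEWLY_QUALIFIED, EXPERIENCED, VETERAN }
    public enum FatigueLevel { RESTED, TIRED, EXHAUSTED }
    public enum DecisionMakerType { AGGRESSIVE, BALANCED, CAUTIOUS }  // used by F-TAO, F-AAWC, S-TAO, S-AAWC

    private final String station;                       // e.g., "RSC", "EWCO", "F-TAO"
    private final Map<String, Double> skills;           // skill type -> probability of success
    private final ExperienceLevel experience;
    private final FatigueLevel fatigue;
    private final DecisionMakerType decisionMakerType;  // null for stations without one

    public WatchstanderProfile(String station, Map<String, Double> skills,
                               ExperienceLevel experience, FatigueLevel fatigue,
                               DecisionMakerType decisionMakerType) {
        this.station = station;
        this.skills = skills;
        this.experience = experience;
        this.fatigue = fatigue;
        this.decisionMakerType = decisionMakerType;
    }

    public String getStation() { return station; }
    public Map<String, Double> getSkills() { return skills; }
    public ExperienceLevel getExperience() { return experience; }
    public FatigueLevel getFatigue() { return fatigue; }
    public DecisionMakerType getDecisionMakerType() { return decisionMakerType; }
}

A profile for, say, the Radar Systems Controller could then be assembled with an expression such as new WatchstanderProfile("RSC", Map.of("SPY-1B operation", 0.85), ExperienceLevel.EXPERIENCED, FatigueLevel.RESTED, null), with the actual skill names and values taken from the interview data.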

4. Testing and Analysis of ADC Simulation and Conduct of Reality Survey

The final phase of the ADC Simulation Project consisted of comprehensive testing and analysis of the ADC Simulation, followed by an assessment of the level of realism of the simulation via a survey given to the air-defense experts at the ATRC Detachment in San Diego. As part of testing the simulation, the following questions were posed and parametric analysis was performed to gather data (the number of errors, the average times to complete tasks):

                    For the RSC watchstander, what is the effect of varying the skill, experience, fatigue, and SPY-1B radar equipment readiness levels (singly) on individual watchstander and CIC team performance?

                    For the EWCO watchstander, what is the effect of varying the skill, experience, fatigue, and SLQ-32 system equipment readiness levels (singly) on individual watchstander and CIC team performance?

                    For the F-TAO watchstander, what is the effect of varying the skill, experience, and fatigue levels and decision-maker types (singly) on individual watchstander and CIC team performance?

                    For the CIC team as a whole, what is the difference in performance between an expert but exhausted F-TAO leading a basic/newly qualified but fully rested CIC team and a basic/newly qualified but fully rested F-TAO leading an expert but exhausted CIC team?

                    What is the effect of varying the weather attributes on the CIC team performance?

Once this data was collected and analyzed, an ADC Simulation Realism Survey was created, using the results of the testing to develop scenarios for the questions. The questions in the survey were designed to elicit responses from the air-defense experts on the level of realism of the simulation based on their professional experiences.
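
The sketch below outlines, under stated assumptions, how one of these parametric questions (varying only the fatigue level) could be swept while the other settings are held constant; runScenario is a hypothetical stand-in for a full scenario execution, and the replication count and placeholder outputs are arbitrary.

// Hypothetical harness for a single-attribute parametric sweep with repeated runs.
import java.util.Random;

public class ParametricSweep {
    enum FatigueLevel { RESTED, TIRED, EXHAUSTED }

    record ScenarioResult(int errors, double averageTaskTimeSeconds) {}

    // Stand-in for one full scenario run with a given fatigue setting.
    static ScenarioResult runScenario(FatigueLevel fatigue, long seed) {
        Random r = new Random(seed);
        // ... configure the CIC team with this fatigue level, run the scenario, read the logs ...
        return new ScenarioResult(r.nextInt(5), 10 + 5 * r.nextDouble());  // placeholder output
    }

    public static void main(String[] args) {
        int replications = 30;  // repeated runs smooth out the stochastic elements
        for (FatigueLevel fatigue : FatigueLevel.values()) {
            int totalErrors = 0;
            double totalAvgTime = 0.0;
            for (int run = 0; run < replications; run++) {
                ScenarioResult result = runScenario(fatigue, run);
                totalErrors += result.errors();
                totalAvgTime += result.averageTaskTimeSeconds();
            }
            System.out.printf("%s: mean errors = %.2f, mean task time = %.2f s%n",
                    fatigue, (double) totalErrors / replications, totalAvgTime / replications);
        }
    }
}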

C. RELEVANCE OF THE ADC SIMULATION IN TRAINING FOR THE COMPLEX AND CHALLENGING TASK OF AIR-DEFENSE OPERATIONS IN THE MODERN ERA

1. Situation of Concern

Air warfare is the most rapid, intense, and devastating type of warfare that the U.S. Navy currently trains for, and battle group operations are primarily focused on gaining proficiency in this mission area. Due to the fast-paced, uncertain, and dangerous aspects of air warfare, the battle group commander's air-defense team (the AEGIS cruiser CIC team) must be trained extensively in the fundamental tenets of these operations in order to effectively protect the aircraft carrier, high-value units, and other naval ships in the vicinity. Given the immense range of duties and responsibilities, there are multitudes of individual watchstation-specific and collective skill sets that must be mastered in order to perform the ADC duties effectively.

2. Current Training Needs and the ADC Simulation

a. Current Situation

The waterfront training teams (AEGIS Training and Readiness Center (ATRC) detachments) are charged by the fleet type commanders with providing Air-Defense Commander (ADC) training to the cruisers, and the quality of the training they provide is typically outstanding. However, ADC operations are considerably complex, and the waterfront training teams are limited by the available training time as well as the scope of the training attempted. Furthermore, the interactions (watchstander to watchstander, ship to ship, ship to aircraft, watchstander to equipment, etc.) that are part of daily operations are numerous, and potential ADC team performance deficiencies may not be noticed during the limited training periods.

b. The Need for New Systems to Assist Training Teams

The limits of human comprehension of ADC operations, given the countless interactions involved, place a barrier on the level, type, and quality of training that can be accomplished. Because there are many different variables to account for in these operations, the training teams and ships can rely only on their collective past experience to produce effective training. This limits the potential gain of the training: the training teams and ships must formulate future ADC scenarios based on experiences from the past, because it is simply too much for humans to analyze all of the variables involved. This raises the question: how can the Navy design training that integrates smoothly with current (and expected future) CIC team proficiency levels (skills, experience, equipment setup, etc.) to support and improve the training requirements of the ships and waterfront training commands? Such training would need to use the valuable experience of the ships and training commands to create scenarios that accurately simulate the enormous complexities inherent in ADC operations. To surpass this limitation, both groups require a system that will enable them to build scenarios, based on the current skill and training levels of the ADC team as well as the environment they will face, to assist them in training toward more realistic threats.

c. A Potential Solution

The ADC Simulation could provide a solution to the problems discussed above. After an initial assessment of the training, experience, and equipment readiness levels of a specific ship, the initial settings for the ADC team and environment can be entered into the system. Upon completion of the setup, the program allows the training teams (as well as the ships) to create simulations based upon the ship's potential operational scenarios in order to discover performance deficiencies. The training teams and ships would use the results from the simulation to provide more focused training on the areas where deficiencies were noted. The ADC Simulation could also be useful to battle group staffs in the planning and development of battle group air-defense tactics and operations. The program can also be employed to validate the usefulness of future scenarios intended for use in training the ships. For the doctrine-formulation commands, this simulation offers the opportunity to evaluate the validity of theoretical changes to ADC and AEGIS doctrine before implementing them in the fleet.

D. BRIEF HISTORY OF NAVAL AND BATTLE GROUP AIR DEFENSE

Modern battle group air defense is the collective effort by the naval warships and carrier air wing to protect, first, the aircraft carrier and other high-value units such as amphibious and supply ships and, second, the fleet's warships from attack, disablement, or destruction by hostile air, naval, and shore forces. Essentially, the primary focus of battle group air defense is the preservation of the battle group's assets to ensure the ability to project military power ashore in support of the United States' strategic objectives. It is primarily an intensive search, detection, and classification process to accurately determine and maintain positive identification of all aircraft and surface vessels within the battle group's operational area.

The problem of naval and battle group air defense was thrust upon the United States Navy in the early 1920s when General Billy Mitchell demonstrated the vulnerability of naval vessels to air power by sinking a battleship with bombs launched from his aircraft, radically altering the vision and conduct of warfare at sea. As World War II would prove, the aircraft carrier, not the battleship, was the primary means by which nations (the United States foremost among them) would project military power onto foreign shores. The aircraft carriers became the centerpiece of the United States' strategy to drive back the Imperial Japanese Fleet, recapture its lost possessions, and secure victory. Recognizing the threat of the aircraft carrier, the Japanese quickly refocused their attacks from the battleships to the carriers, forcing a similar realignment in the thinking of the United States Navy. Additionally, the technology of radar became a widely used and effective tool to organize the protection of the battle group. The massive surface fleets of the Navy were now assigned another new primary task: defend the aircraft carrier.

Early naval air defenses relied upon massive, uncoordinated fire from anti-aircraft artillery such as 20mm, 40mm, three-inch, and five-inch guns... Air defense was made up of a series of local anti-air battles fought close aboard, strictly in self defense.[1]

Towards the end of the war, the danger of the kamikazes led to another reorganization and innovation in battle group air defense known as defense-in-depth.

Tactics evolved quickly, including tightly grouped defensive ship formations and picket ships for early warning. Although primitive by current standards, the concept of effective, coordinated defense-in-depth took shape.[2]

Following World War II, the 1950s and 1960s ushered in rapid advances in offensive military technology, which required corresponding changes in defensive tactics to protect the battle group. Foremost among these advances were the introduction of jet power and unmanned missiles, especially anti-ship missiles.

The advent of unmanned missiles and long-range Soviet bombers led the Navy to develop defensive weapons and enhance ship-to-ship coordination... In the 1950s, the Navy began deploying three guided SAM variants known as 3-T missiles: long-range Talos, medium-range Terrier, and short-range Tartar. Simultaneously, a large-scale program to convert previously non-missile ships to missile shooters was initiated with vessels capable of firing one of these missiles.[3]

However, the continual advances (speed, maneuverability, and accuracy) in offensive anti-ship missile technologies reached a point where, despite the capability of the defensive missiles to intercept, the human watchstander became the weak point in the overall air-defense system.The watchstanders were unable to communicate, coordinate, and react quickly enough to defend against the most advanced and deadly of missile technologies.

Faster and more reliable means of surveillance and identification data exchange were required. The Navy tactical data system (NTDS) was introduced in 1958, the world's first shipboard tactical data system based on programmable computers. This was an initial step in the integration of multi-ship systems in a force-wide air-defense system.[4]

Advances in the capabilities of NTDS allowed for quicker and more accurate transmission of critical data for battle group air defense. Eventually, airborne early warning aircraft such as the E-2A Hawkeye (still in use as the upgraded E-2C variant) were deployed to increase the surveillance range of the battle group. By the early 1980s, the long-term AEGIS project reached fruition, and the first cruisers carrying the powerful SPY-1 phased array radar, integrated into a potent command and control system, were introduced into the fleet. "Introduced operationally in 1983, the heart of the AEGIS weapon system is the SPY-1 phased array radar, which provides automatic detection and fire control quality tracking for hundreds of targets simultaneously."[5] The AEGIS cruisers (Ticonderoga class), followed by the AEGIS destroyers (Arleigh Burke class), tremendously improved the Navy's capability to perform battle group air defense and countered a multitude of previously dangerous anti-ship missile threats. Since the AEGIS fleet's arrival, the last twenty years have been marked by the steady advance and counter-advance of offensive versus defensive weapons. The offensive strides in technology were characterized by ever greater increases in the speed and lethality of anti-ship missiles. Similar measures were achieved in defensive missile systems, but one of the most significant advances in overall battle group air defense occurred with the development of Link 11, followed by its more effective successor, Link 16. These battle group data exchange systems (descended from the original NTDS) markedly increased the capability of the battle group units to coordinate their detection information and actions effectively.

Battle group air defense has continued to be a primary mission for the United States Navy. In the late 1980s, two incidents highlighted the need for research into two fields of study to enhance naval performance: the psychology of decision-making under stress and human factors design. The first incident involved the USS Stark, which was attacked by two Exocet anti-ship missiles and was nearly sunk. The second incident occurred in 1988 and involved the USS Vincennes, which mistakenly shot down a civilian Iranian airliner during a surface battle with Iranian naval forces. Influenced by several factors involving CIC communications among watchstanders, and exacerbated by CIC systems built with poor human-computer interaction design, the Vincennes crew believed the ship was involved in a coordinated hostile air-sea battle and reacted accordingly. Both incidents caused the United States Navy to reassess the importance of the human being in the entire air-defense process, a topic that had previously been given a lower priority than technological advances. Some of the most prominent research studies and projects that resulted from these incidents are discussed in Chapter II; the overall thrust of these documents is that the watchstander should always be at the forefront of any understanding of the performance of battle group air-defense operations. The ADC Simulation was developed with this premise in mind.

E. WATCHSTANDER ORGANIZATION OF A CRUISER COMBAT INFORMATION CENTER

1. Overview of a CIC Organization

Onboard naval ships, the Combat Information Center (CIC) is the nexus of all of the ship's tactical operations, and it is from this location that the commanding officer and watch teams coordinate these activities. Often, several different warfare operations are conducted simultaneously from the CIC, including Air Warfare (of which battle group air defense is a part), Surface Warfare, Undersea Warfare, and Strike Warfare. Information is channeled into the CIC for review, analysis, and assessment by the appropriate warfare teams and, following the decisions of the commanding officer or Tactical Action Officer, the CIC team performs the required actions. Figure 2 below shows the organizational diagram of the CIC air-defense team implemented in the ADC Simulation. Dashed lines indicate indirect leadership control of the watch team members under the specified watchstander.

Figure 2. CIC Air-Defense Organization.

The CIC contains a comprehensive assortment of equipment and combat systems to support the watch team, foremost among them the tactical systems consoles. These consoles have a wide variety of uses, such as activating weapon systems (launching missiles, firing guns), configuring sensor systems (radar, IFF systems), displaying contact tracks (aircraft, ships, submarines, etc.), modifying and displaying track information, and communicating externally with other ships and aircraft. Additionally, an internal communications system allows the watchstanders to communicate with each other.

2. Brief Description of the CIC Air-Defense Watchstanders

a. Force Tactical Action Officer (F-TAO)

The Force Tactical Action Officer is in overall control of the air-defense operations for the battle group and is responsible for most of the major decisions. Decisions made at lower levels can be overridden by the F-TAO, if deemed necessary. Most importantly, the F-TAO makes the final decisions on contact classifications as well as weapons batteries release for ship and aircraft missile engagements. The F-TAO and F-AAWC work very closely to coordinate the air-defense operations of the battle group.

b. Force Anti-Air Warfare Coordinator (F-AAWC)

The Force Anti-Air Warfare Coordinator directly runs the air-defense identification process within the battle group's surveillance airspace picture and is responsible to the F-TAO for the performance of this process. The F-AAWC coordinates the movement and assignment of friendly aircraft via the external communication circuit to other ships and the Red Crown watchstation. The F-AAWC can order the repositioning of aircraft and the visual intercept/identification of unknown aircraft but requires the authorization of the F-TAO to order an engagement of an aircraft with weapon systems. Additionally, the F-AAWC, via the F-TAO, is responsible for ordering weapons employment by battle group air-defense ships.

c. Ship Tactical Action Officer (S-TAO)

The Ship Tactical Action Officer leads the CIC watch team for the ship and is responsible for most of the major decisions made during air-defense operations for the S-TAO's ship only. The Ship TAO is responsible for the defense of his or her ship and is authorized to employ weapon systems in its defense if a threat of attack is perceived to be imminent. On ships not assigned the ADC duties, the S-TAO is in charge of the CIC watch team and works for the F-TAO on the ADC ship.

d. Ship Anti-Air Warfare Coordinator (S-AAWC)

The Ship Anti-Air Warfare Coordinator directs the aircraft detection and classification process within the ship's airspace and is responsible to the Ship TAO for the performance of the team identification process. Although subordinate to the Ship TAO, the S-AAWC receives a substantial amount of air-defense tasking from the F-AAWC, who is coordinating the overall battle group air-defense process. Among other responsibilities, the S-AAWC controls the movement of friendly aircraft assigned to the ship and, upon proper authorization, employs the ship's self-defense missile weapon systems via the Missile Systems Supervisor. On ships not assigned the ADC duties, the S-AAWC directs the CIC team in the performance of the air-defense duties.

e. Electronic Warfare Control Officer (EWCO)

The Electronic Warfare Control Officer is responsible for the operation of the electronic emissions detection equipment, which is used to detect and classify various types of aircraft based on their radar signal emissions. These radar signal emissions are one of the primary means by which the CIC air-defense team distinguishes friendly and neutral aircraft from potentially hostile/unfriendly aircraft. Although this watchstander works directly for the Ship TAO, on the air-defense commander cruiser the Force TAO and Force AAWC also use the EWCO's reports to assist them in their duties.

f. Radar Systems Controller (RSC)

The Radar Systems Controller operates the SPY-1A/B radar system, which is the primary means by which aircraft are detected and tracked by the ship. Often, radar detections are the first indications of the presence of an aircraft, and the initial kinematic data (course, speed, altitude, location) influence the initial assessment of the aircraft's threat potential and priority for observation. Although this watchstander works directly for the Ship TAO, on the air-defense commander cruiser the Force TAO and Force AAWC also use the RSC's reports to assist them in their duties.

 

 

g. Tactical Information Coordinator (TIC)

The Tactical Information Coordinator operates and maintains the Tactical Digital Information Link (TADIL) A/Link 11 and TADIL J/Link 16, which communicate tactical data among the friendly ships and aircraft in the battle group. These links allow the friendly units to possess an expanded view of the battle group's airspace, increasing overall tactical situational awareness. On the ADC cruiser, the TIC has more demanding duties and is responsible for the coordination and control of the entire battle group's Link 11/16 picture. The quality of this picture is of significant importance to the primary air-defense decision-makers (Force TAO, Force AAWC).

h. Identification Supervisor (IDS)

The Identification Supervisor is primarily responsible for performing Identification Friend or Foe (IFF) system challenges on unknown aircraft and inputting the results (and other relevant identification data) into the CIC track database (AEGIS Command and Display system) for viewing by other watchstanders. Additionally, when directed, the IDS initiates query and/or warning procedures against specified contacts via the external communications system. The results of these challenges assist the primary decision-makers in the classification of aircraft contacts.

i. Combat Systems Coordinator (CSC)

The Combat Systems Coordinator is in charge of the activation, monitoring, and deactivation of the primary and secondary combat systems that support the CIC. Combat systems equipment degradations and failures are reported to the CSC for resolution and repair. The CSC is also the primary lead for initiating troubleshooting procedures for certain combat systems equipment, including the communication systems, Link 11/16 systems, and IFF systems. Additionally, the CSC is directly responsible for the input, activation, and deactivation of AEGIS doctrine (weapons, IFF, and identification).

j. Missile Systems Supervisor (MSS)

The Missile Systems Supervisor is directly responsible for the employment (firing) of the ship's surface-to-air missiles and the self-defense Close-In Weapon System (CIWS). The MSS works directly for the Ship AAWC and receives authorization to activate weapon systems from that watchstander.

k. Red Crown (RC)

The Red Crown watchstander is responsible for checking friendly aircraft (both launching from and returning to the aircraft carrier) to verify their identity and mission assignment. These duties require Red Crown to validate IFF code assignments and communicate with the aircraft directly. After being cleared by Red Crown, the aircraft are allowed to proceed on their assigned mission or continue their approach to the carrier.

F. APPLICATION OF MULTI-AGENT SYSTEM TECHNOLOGY IN THE ADC SIMULATION

The ADC Simulation watchstanders were implemented using multi-agent system (MAS) technology, in which each watchstander was designed as an "agent." Within the context of this simulation, an agent is a component of software with the following characteristics:

                    It is capable of acting in an environment.

                    It can communicate directly with other agents.

                    It is driven by a set of tendencies (in the form of individual objectives or of a satisfaction/survival function which it tries to optimize).

                    It possesses resources of its own.

                    It is capable of perceiving its environment (but to a limited extent).

                    It has only a partial representation of this environment.

                    It possesses skills and can offer services.

                    Its behavior tends toward satisfying its objectives, taking account of the resources and skills available to it and depending on its perception, its representations and the communications it receives.[6]

Essentially, the watchstander agents in the ADC Simulation possess intent and objectives (to perform their assigned duties), communicate among each other to achieve those objectives, and possess resources (skill, experience, fatigue, and decision-maker type attributes, as well as combat systems equipment). They perceive their environment to a limited extent, since each watchstander agent receives this information via combat systems sensor equipment, through verbal communications from other watchstander agents, or through CIC watchstation information display systems. The watchstander agents offer services to each other by disseminating information vital to the performance of the air-defense duties and operations of the CIC, and they exert influence on the environment (and on other agents) through their actions (e.g., the Force TAO's classification of an aircraft as Hostile).
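
A schematic Java interface capturing these agent characteristics (limited perception, direct communication, and goal-directed action) might look like the sketch below; the names are illustrative and do not reflect the thesis's actual class hierarchy.

// Schematic agent contract only: perceive a limited view, communicate, act toward objectives.
import java.util.List;

public interface WatchstanderAgent {
    // Limited perception: the agent only sees what its sensors and communications provide.
    void perceive(List<String> sensorReports, List<Message> incomingMessages);

    // Direct agent-to-agent communication over the simulated internal voice circuits.
    List<Message> communicate();

    // Act in the environment in pursuit of the agent's objectives (e.g., classify a contact).
    void act(long simTimeMillis);

    record Message(String fromStation, String toStation, String content) {}
}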

MAS technology is a blending of the cognitive/social sciences (psychology, ethology, sociology, philosophy), the natural sciences (ecology, biology), and the computer sciences, since multi-agent systems simultaneously model, explain, and simulate natural phenomena (in this case, human behavior in the ADC Simulation) and provide models for self-organization.[7] Traditional programming is often very mechanistic, hierarchical, and modular and, consequently, does not lend itself well to simulating the often surprising (whether organized or chaotic) behavior of interacting human and environmental systems. MAS technology is less restrictive in its design, which produces simulation behavior often more akin to that observed in the real world. The term "multi-agent system" is applied to a system comprising the following elements (a minimal structural sketch follows the list):

                    An environment, E, that is, a space which generally has a volume.

                    A set of objects, O. These objects are situated; that is to say, it is possible at any given moment to associate any object with a position in E. These objects are passive; that is, they can be perceived, created, destroyed and modified by the agents.

                    An assembly of agents, A, which are specific objects (A ⊆ O) representing the active entities of the system.

                    An assembly of relations, R, which link objects (and thus agents) to each other.

                    An assembly of operations, Op, making it possible for the agents of A to perceive, produce, consume, transform and manipulate objects from O.

                    Operations with the task of representing the application of these operations and the reaction of the world to this attempt at modification.[8]
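
The sketch below is one minimal way to mirror this E/O/A/R/Op structure in Java; it is an illustration of the definition rather than the simulation's actual data model.

// Structural sketch of the MAS elements: environment E, objects O, agents A (a subset of O),
// relations R, and operations Op applied each simulation step.
import java.util.ArrayList;
import java.util.List;

public class MultiAgentSystem {
    public static class SimObject {                 // an element of O, situated in E
        double x, y;                                // position within the environment
    }

    public static class Agent extends SimObject {   // agents A are specialized objects (A is a subset of O)
        void perceive(Environment e) { /* limited perception of E and the objects in O */ }
        void operate(Environment e)  { /* apply operations Op to objects in O */ }
    }

    public static class Relation {                  // an element of R linking two objects
        SimObject from, to;
        String kind;                                // e.g., "reports-to", "tracks"
    }

    public static class Environment {               // E: the space containing all objects
        final List<SimObject> objects = new ArrayList<>();    // O
        final List<Agent> agents = new ArrayList<>();          // A, each also present in O
        final List<Relation> relations = new ArrayList<>();    // R

        void step(long simTimeMillis) {
            // Each tick, every agent perceives and then operates on the shared environment.
            for (Agent a : agents) {
                a.perceive(this);
                a.operate(this);
            }
        }
    }
}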

In the ADC Simulation, the watchstander agents perform their duties within a layered environment (the Combat Information Center inside the AEGIS cruiser, within the battle group's operational area) that contains a multitude of objects (aircraft contacts). The watchstander agents have the capability to execute a set of operations to perceive the environment as well as the objects in it and to communicate with each other. Conversely, the objects within the ADC Simulation environment can also perform operations to perceive and interact with the AEGIS cruiser (thus affecting the watchstander agents) and the aircraft carrier. These operations are governed by relationships that determine the scope and degree to which the operations can occur. Figure 3 below provides an overview of the implementation of MAS technology in the ADC Simulation.

 

Figure 3. ADC Simulation MAS Overview Diagram.

 

The integration of the agents, the relations among the agents and objects, and the operations available to them within the ADC Simulation environment produces a highly dynamic and realistic model of a challenging and complex task performed by humans.


II. RELATED WORK IN THE AREA OF NAVAL AIR-DEFENSE SIMULATION

A. RELATED WORK INTRODUCTION

During the research for the design and development of the ADC Simulation, extensive resources were discovered relating to the subjects of air defense, human-computer interaction, cognitive modeling, team training systems, naval simulation, and threat assessment. Most of the research into the topic of air defense was sanctioned or supported by the United States Navy or other affiliated organizations, and a sharp increase in such research, notably in the areas of decision-making under stressful conditions and air-threat assessment, occurred starting in the late 1980s. From reviewing the research, we determined that the likely cause of this surge was the USS Vincennes incident, which sparked an effort by the Navy to understand the underlying factors (mental, physical, and informational) affecting the crew's performance during the shoot-down of the Iranian commercial airliner. Some of this research resulted in follow-on projects by the Navy to develop better Combat Information Center (CIC) consoles for the watchstanders to improve performance. Other related work originated in the commercial software development sector, where a multitude of video games that simulate naval operations have been produced. The following papers, projects, systems, and programs were most relevant to the research, development, and implementation of the ADC Simulation:

                    Area Air-Defense Commander (AADC) Battle Management System

                    Tactical Decision Making Under Stress (TADMUS) Decision Support System

                    Multi-Modal Watch Station (MMWS) Program

                    Naval Air-Defense Threat Assessment: Cognitive Factors Model

                    Air Threat Assessment: Research, Model, and Display Guidelines

                    Cognitive and Behavioral Task Implications for Three Dimensional Displays Used in Combat Information/Direction Centers

                    Battle Force Tactical Training (BFTT) System

                    Naval Combat Simulation Video Games

B. AREA AIR-DEFENSE COMMANDER (AADC) BATTLE MANAGEMENT SYSTEM

The Area Air-Defense Commander (AADC) Battle Management System was developed by the Navy, following the Gulf War, for more effective coordination of air-defense planning and execution in multi-service (i.e., Army, Air Force, Navy, and Marines) and coalition (international) operations. The primary mission of an Area Air-Defense Commander is to develop and execute a theater-wide air-defense plan to support the strategic and operational plans of the Joint Forces Commander (JFC) during an operation. Prior to the development of the AADC system, such air-defense planning could only be accomplished manually, a laborious task that "used to take 10 to 15 people hours or even days to generate an air-defense plan."[9] To complicate matters, it was necessary to conduct numerous evaluations and war-gaming scenarios against the plan to evaluate its effectiveness. However, this process was significantly limited by the bounds of human performance, since only a small number of scenario variables could be modified and tested before the analysis became unwieldy and intractable.

Designed and developed by Johns Hopkins University's Applied Physics Laboratory (APL), the AADC System had to attain two objectives. First, it would "provide a single, integrated picture of the battle-space so that a joint commander can quickly gather data on air and missile attacks and defend against them."[10] This would greatly enhance the AADC's ability to maintain an accurate view of the operational area. Second, the AADC System would allow the air-defense staff to rapidly create, modify, and evaluate plans through the system's automated features, which substantially reduced the time required for the process.

A number of complex issues surround planning and coordinating wide-area air defense... These variables represent hundreds of courses of action combinations that planners must consider... Every time you change a variable, you change the results... The AADC can repeatedly war game a plan against possible enemy attacks, running a complete scenario up to 25 times to verify results.[11]

Given this capability, the air-defense staff could now evaluate and analyze an air-defense plan, including the extraordinary number of possible variables in a scenario, with a greater level of confidence than previously possible, since a larger number of potential outcomes could be explored.

The AADC System is similar to the ADC Simulation in two ways. First, both programs are designed to improve the Navy's ability to conduct air defense. Second, both the AADC System and the ADC Simulation allow the users to modify variables in the programs to explore the potential outcomes that might result from those changes. However, these programs differ significantly in their objectives and focus: the AADC System was developed for theater-wide, strategic and operational planning by the AADC, while the ADC Simulation concentrates on battle group air defense as well as the performance of the ADC watchstanders and is implemented using a multi-agent system architecture.

C. TACTICAL DECISION-MAKING UNDER STRESS (TADMUS) DECISION SUPPORT SYSTEM

The Tactical Decision-Making Under Stress (TADMUS) study was one of the first comprehensive explorations into the causes of the USS Vincennes incident.

The congressional investigation of this incident suggested that emotional stress may have played a role in contributing to this incident and the TADMUS program was established to assess how stress might affect decision making and what might be done to minimize those effects.[12]

The TADMUS study revealed that the Combat Information Center consoles and systems in use during the timeframe of the study (late 1980s to early 1990s) contained significant Human-Computer Interaction (HCI) flaws, which degraded watchstander performance under stressful conditions in high-contact-density littoral environments. The direct result of these flaws was that

Teams exhibited periodic losses of situation awareness, often linked with limitations in human memory and shared attention capacity. Environmental stressors such as time compression and highly ambiguous information increased decision biases.[13]

The following problems were identified with the short-term memory limitations:

                    Mixing up track numbers and forgetting track numbers.

                    Mixing up track kinematic data and forgetting track kinematic data.

                    Associating past track related events/actions with the wrong track and associating completed own-ship actions with wrong track.[14]

A second set of problems was categorized as decision-bias-related and included the following:

                    Carrying initial threat assessment throughout the scenario regardless of new information (framing error).

                    Assessing a track based on information other than associated with the track (e.g., old intelligence data, past decision-maker experiences, etc.).[15]

All of these problems occurred during the USS Vincennes shoot-down of the Iranian commercial airliner, and the results of the study demonstrated the significant negative impact the HCI design of the CIC consoles had on the watchstanders. Once the research and analysis of the then-current CIC systems was completed, the TADMUS program embarked on a second phase of research with the goal of developing improved CIC display consoles. This system, known as the Decision Support System (DSS), had the following objectives:

                    Minimize the mismatches between cognitive processes and the data available in the CIC to facilitate decision-making.

                    Mitigate the shortcomings of current CIC displays in imposing high information-processing demands and exceeding the limitations of human memory.

                    Display the data in the CIC in graphical rather than numeric representations wherever appropriate.[16]

The evaluation of the DSS component during training simulations determined that the new system greatly improved the overall performance of the air-defense teams, especially in the area of situational awareness.Additionally, the watchstander participants rated the DSS component with a higher level of usability than existing CIC console displays.

The TADMUS and DSS research programs contained two issues relevant to the development of the ADC Simulation. First, the TADMUS project was among the initial studies conducted to examine the cognitive processes of naval air-defense personnel during stressful situations. The mental models proposed to explain the decision-making process of these personnel created a foundation for more detailed subsequent work, and many of the general principles of watchstander cognition formulated from the TADMUS and DSS projects were incorporated into the design of the ADC Simulation. Second, the TADMUS program identified two categories of cognitive errors (short-term memory limitations and decision biases) that occurred when watchstanders experienced stressful situations while performing air-defense operations. These errors ultimately led to the loss of situational awareness by the watchstanders. To replicate a reasonable level of realism in the program, the watchstanders in the ADC Simulation were designed so that generic approximations of the cognitive errors listed above occur (based on a random probability function) during scenarios.
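
The random-probability idea can be sketched as follows: each time a simulated watchstander handles a task, a stress-scaled chance of a short-term-memory or decision-bias error is drawn. The baseline probabilities and the stress scaling below are assumed values for illustration, not the figures used in the simulation.

// Illustrative sketch of stress-dependent cognitive error injection.
import java.util.Random;

public class CognitiveErrorModel {
    public enum ErrorType { NONE, MEMORY_LIMITATION, DECISION_BIAS }

    private final Random random = new Random();
    private final double baseMemoryErrorChance = 0.02;   // assumed baseline per task
    private final double baseBiasErrorChance = 0.01;     // assumed baseline per task

    // stressFactor >= 1.0; higher stress (time compression, ambiguity) raises error rates.
    public ErrorType sample(double stressFactor) {
        double roll = random.nextDouble();
        if (roll < baseMemoryErrorChance * stressFactor) return ErrorType.MEMORY_LIMITATION;
        if (roll < (baseMemoryErrorChance + baseBiasErrorChance) * stressFactor) return ErrorType.DECISION_BIAS;
        return ErrorType.NONE;
    }
}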

D. MULTI-MODAL WATCH STATION (MMWS) PROGRAM

The Multi-Modal Watch Station (MMWS) program was a four-year project focused on the development of specialized watchstation consoles incorporating improved human-computer interface (HCI) designs to improve the performance of watch teams during battle group air-defense and land-attack warfare operations.

MMWS is a concept design for a future command and control decision support system intended to serve as a test prototype to develop human-computer interface (HCI) design recommendations for future Navy combat and command/control information systems.[17]

Diverging from past, traditional CIC console engineering processes, the MMWS program initially performed a detailed analysis of air-defense watchstanders' behaviors, interactions, and processes to determine all of the requirements for their task lists and workload.

Key requirements were identified related to the user tasks or workload and included mission, work management, communication, and HCI control. User support concepts were developed and refined in relation to work management user tasks, which included the ability to assist the user in the selection of tasks and work strategies... Further, research in workload management led to the refinement of models to assess and predict workload during real-time tactical operations.[18]

The comprehensive design phase was a significant departure from the process typically used by military contracting corporations because of its primary focus on developing a system that supported the user's task and work requirements, as opposed to forcing the user to adapt his or her requirements to the system. The MMWS project also advanced another substantial research initiative that could transform shipboard CIC console design. Currently, the Navy contracts for the building of large hardware console systems, which results in an extraordinary cost to maintain the supporting infrastructure for contractors, initial installation, repair parts, and the training pipeline for technicians. During the development of the MMWS, "... a Java version of the software was developed to test the feasibility of transition for the HCI components into a fielded naval software system."[19] If adopted as a standard for implementation in future naval ships, this would allow the Navy to divorce itself from investing in highly expensive, inflexible hardware systems and move toward a common computer display system that would run a software implementation of the older console systems. Such a step could substantially reduce the cost (production, installation, maintenance, and training) of CIC console systems while also ensuring that upgrades would occur more frequently and at a reduced cost.

The MMWS consoles developed during the project implemented decision-aid user-support tools to increase usability and learnability and to decrease the potential for information overload and errors. The MMWS research included extensive interviews and console evaluations with air-defense subject matter experts, which resulted in several succeeding versions of the system. At the conclusion of the project, the team demonstrated a suite of MMWS consoles that corrected many of the HCI design problems inherent in the current set of AEGIS CIC consoles, problems which caused information overload, increased the likelihood of errors, and aggravated the potential for loss of situational awareness. During a comprehensive system evaluation, the project team demonstrated that the MMWS consoles could reduce the size of the typical air-defense team by two to three people while increasing overall performance levels.

E. NAVAL AIR-DEFENSE THREAT ASSESSMENT: COGNITIVE FACTORS MODEL

Another investigation examined the cognitive aspects of the threat assessment process used by naval air-defense officers during battle group operations. The research evaluated personnel during exercises and operations to determine the factors involved in the decision-making used to identify and classify air contacts.

Factors are the elements of data and information that are used to assess air contacts. Traditionally, they are derived from kinematics, tactical, and other data. Examples of such data include course, speed, IFF mode, and type of radar emitter.[20]

The research indicated the watchstanders mentally maintained a range of possible-track templates, derived from a set of twenty-two identifying factors, which they used to classify contacts and calculate threat assessments.Some of the most promising factors are listed below:

                    Electromagnetic Signal (ES) Emissions

                    Course (with respect to the battle group)

                    Speed

                    Altitude

                    Point of Origin

                    Identification Friend or Foe (IFF) Modes 1,2,3, 4, C

                    Flight Profile

                    Intelligence Information

"Threat assessment is defined... as the process of evaluating aircraft that are flying in the vicinity of one's ship, and determining how much of a threat they represent to the ship as well as the battle group."[21] A contact's factors and other data were compared against the relevant templates, and the template with the highest degree of fit was used to identify the air contact as well as to make a threat assessment. The figure below illustrates the inferred threat-assessment process.

Figure 4. Cognitively Based Model of Threat Assessment [22].

This research is consistent with the ADC Simulation, which models the mental decision-making and threat-assessment processes of the Combat Information Center (CIC) watchstanders as part of their process of identifying and classifying air contacts. During the initial stages of research gathering, we conducted in-depth interviews with several air-defense experts at the AEGIS Training & Readiness Center (ATRC) Detachment in San Diego, which focused on the contact threat-assessment and identification processes described above. The results of our interviews confirm the previous work's findings about factors, templates, and the contact threat-assessment and identification processes. A similar cognitive model, including the factors and templates, was implemented in the ADC Simulation for the air-defense decision-making of the CIC watchstander agents.
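
A hedged sketch of this template-matching style of classification is shown below: a contact's observed factors are scored against candidate templates, and the template with the highest degree of fit supplies the identification and threat estimate. The factor names, templates, and scoring rule are illustrative assumptions, not the model from the cited study or the thesis implementation.

// Illustrative template matching: score observed factors against candidate templates
// and pick the template with the highest degree of fit.
import java.util.List;
import java.util.Map;

public class TemplateMatcher {
    public record Template(String label, double baseThreat, Map<String, String> expectedFactors) {}

    // Degree of fit: fraction of a template's expected factors matched by the observed contact.
    static double degreeOfFit(Map<String, String> observedFactors, Template template) {
        long matches = template.expectedFactors().entrySet().stream()
                .filter(e -> e.getValue().equals(observedFactors.get(e.getKey())))
                .count();
        return (double) matches / template.expectedFactors().size();
    }

    // Returns the best-fitting template, which supplies the classification and threat estimate.
    static Template classify(Map<String, String> observedFactors, List<Template> templates) {
        Template best = null;
        double bestFit = -1.0;
        for (Template t : templates) {
            double fit = degreeOfFit(observedFactors, t);
            if (fit > bestFit) { bestFit = fit; best = t; }
        }
        return best;
    }

    public static void main(String[] args) {
        Template commercialAir = new Template("Commercial Air", 0.1,
                Map.of("ES emission", "weather radar", "flight profile", "airway, steady altitude"));
        Template strikeAircraft = new Template("Hostile Strike", 0.9,
                Map.of("ES emission", "fire-control radar", "flight profile", "low and fast, closing"));
        Map<String, String> observed = Map.of("ES emission", "fire-control radar",
                "flight profile", "low and fast, closing");
        System.out.println(classify(observed, List.of(commercialAir, strikeAircraft)).label());
    }
}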

F. AIR THREAT ASSESSMENT: RESEARCH, MODEL, AND DISPLAY GUIDELINES

This paper reviews several other studies, including the one cited in Section E, as part of an ongoing line of research into the practice of air-threat assessment of contacts during battle group air-defense operations.

The studies provided a theoretical and applied basis for threat assessment by defining specific cue-data relationships and detailing the cognitive processes involved in air defense simulation assessment. Those processes were incorporated into a proposed model of threat assessment that was successfully validated against threat ratings from experienced air defense decision makers.[23]

The studies addressed by the paper include the Tactical Decision Making Under Stress (TADMUS) program, the subsequent Decision Support System (DSS), the Basis For Assessment (BFA) tool, and several papers covering Naturalistic Decision Making (NDM). The knowledge gained from these studies was used toward developing a new threat-assessment model (displayed below) and the creation and updating of guidelines for displaying contact threat-assessment data. The original TADMUS research led to a follow-on project, the DSS, to implement specialized air-defense displays for watchstanders. These displays were designed to enhance the performance of the air-defense personnel by providing them with critical data for decision-making while preventing information/screen overload.

 

Figure 5. Threat Assessment Model [24].

Updated threat assessment interface guidelines were recommended:

                    Display a threat assessment window on-screen when a track is hooked.

                    Compute and display the threat ratings of tracks.

                    Show threat rating history.

                    Provide a list of all assessment cues.

                    Order cues by importance to the decision maker.

                    Show the impact of each cue on overall threat rating.

                    Provide a track priority list.[25]

Using these recommendations, the authors produced a limited prototype for demonstration.
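
As a rough illustration of the "compute and display the threat ratings" and "order cues by importance" guidelines above, the sketch below combines weighted assessment cues into a single rating and reports the cues by their contribution; the cue names and weights are assumptions for demonstration, not values from the cited study.

// Illustrative weighted-cue threat rating with cues reported by contribution.
import java.util.Comparator;
import java.util.List;

public class ThreatRating {
    public record Cue(String name, double weight, double value) {   // value in [0, 1]
        double contribution() { return weight * value; }
    }

    // Weighted sum normalized by total weight, yielding a rating in [0, 1].
    static double rate(List<Cue> cues) {
        double totalWeight = cues.stream().mapToDouble(Cue::weight).sum();
        double weightedSum = cues.stream().mapToDouble(Cue::contribution).sum();
        return totalWeight == 0 ? 0.0 : weightedSum / totalWeight;
    }

    public static void main(String[] args) {
        List<Cue> cues = List.of(
                new Cue("ES emission", 0.4, 1.0),     // fire-control radar detected
                new Cue("Closing course", 0.3, 0.8),
                new Cue("IFF response", 0.3, 0.2));   // partially valid modes
        System.out.printf("Threat rating: %.2f%n", rate(cues));
        cues.stream()
                .sorted(Comparator.comparingDouble(Cue::contribution).reversed())
                .forEach(c -> System.out.printf("  %s -> %.2f%n", c.name(), c.contribution()));
    }
}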

g.������� cognitive and behavioral task implications for three dimensional displays used in combat information/direction centers

This paper reviews the behavioral and cognitive task analysis of the Joint Maritime Command Information System (JMCIS) for the purpose of determining whether the implementation of three-dimensional displays would be useful.The objective of JMCIS is to produce a Common Tactical Picture (CTP) for the battle group or joint-force commander to ensure the maintenance of battlespace situational awareness.As part of this objective, the CTP is designed to integrate the undersea warfare (USW), mine warfare (MW), Surface Warfare (SW), Air Warfare (AW), Amphibious Warfare (AMW), and C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance) battlespace pictures into one comprehensive display.During the analysis of the HCI of most combat system displays, this study cited many of the problems associated with poor display designs from the TADMUS project including short-term memory limitations and loss of situational awareness.

The researchers identified situational awareness as the primary area concern during the task analysis and selected the Applied Cognitive Task Analysis (ACTA) methodology (developed by Klein and Associates), which addresses the mental models of both novices and experts.

Situation awareness, as defined by Endsley, is a threefold process including (1) perception of the elements in the environment within a volume of time and space, (2) comprehension of the meaning, and (3) projection of status in the near future.At the first cognitive level, the user detects the target cues or objects in the environment.During the second cognitive level, the perceived information is processed and integrated into an assessment of the situation.At the third cognitive level, new projected outcomes are formulated for the situation.[26]

The study further asserted that situational awareness was also affected by the following four factors: (1) capabilities, (2) training and experience, (3) preconceptions and objectives, and (4) ongoing task workload. Taking into account all of these factors, "as task workload and stress increase, decision-makers will often lose a 'Big Picture' awareness and focus on smaller elements."[27]

The discussion of situational awareness in this study had particular relevance to the development of the ADC Simulation. The four factors affecting situational awareness mentioned above were incorporated into the design of the watchstander agents in the simulation. Our interviews with the air-defense subject matter experts validated the conclusions the researchers made concerning situational awareness, especially the effects of training, experience, and task workload.

h.        Battle Force Tactical Training (BFTT) System

The Battle Force Tactical Training (BFTT) system was designed for fleet-wide training of naval units. It provides each ship with a comprehensive training capability, run by a specialized computing system that uses the existing CIC console architecture, and it is used primarily for air-defense training of the CIC team.

The BFTT system…provides Commanding Officers, the Afloat Training Organization (ATO) and Battle Group/Battle Force (BG/BF) commanders with the ability to conduct coordinated, realistic, high stress combat system training for developing war fighting proficiency and maintaining combat readiness.[28]

On the unit level, BFTT allows the ships to develop realistic training by designing high-fidelity scenarios which inject actual signal information into the ship combat systems to emulate reality. On a battle force scale, the BFTT system can produce a synthetic theater where an entire fleet of ships and staffs (whether in-port or underway) can participate in a worldwide war-gaming exercise. Additionally,

By leveraging the BFTT scenario generation environment, replay is familiar to the operator in terms of map appearance, controls, and track features.It is also an extremely powerful learning tool, displaying both the ground truth and the perceived tracks from one or more exercise participants.[29]

Consequently, upon completion of the training scenario, individual units as well as the entire battle force can immediately conduct a review of the scenario events for each of the watch-teams and provide them with near-instant feedback on their overall performance. If desired, specific portions of the scenario where a watchstander(s) made a mistake could be replayed so that the person could correct the deficiency.

Although the objective of the BFTT system is considerably different from that of the ADC Simulation, the two systems have two aspects in common. First, similar to BFTT, the ADC Simulation will allow the user to always see the ground truth for air contact identification along with the perceived identification of the aircraft. Second, the ADC Simulation will maintain a record log of all of the actions, inputs and outputs, and events for each watchstander as well as for the entire scenario so that event reconstruction can be performed.
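This record-keeping requirement lends itself to a simple data structure. The following is a minimal, hypothetical Java sketch of a per-watchstander decision log that supports time-ordered replay for chain-of-error review; the class, field, and method names are illustrative assumptions and are not taken from the actual simulation code.

// Minimal sketch (not the thesis code) of a per-watchstander decision log
// that records time-stamped actions and replays them in time order.
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class DecisionHistoryLog {

    /** One logged watchstander action at a given scenario time. */
    public static class Entry {
        final long scenarioTimeSeconds;
        final String watchstation;   // e.g. "IDS", "F-TAO"
        final String action;         // e.g. "classified track 4021 as Unknown"

        Entry(long scenarioTimeSeconds, String watchstation, String action) {
            this.scenarioTimeSeconds = scenarioTimeSeconds;
            this.watchstation = watchstation;
            this.action = action;
        }
    }

    private final List<Entry> entries = new ArrayList<Entry>();

    public void append(long scenarioTimeSeconds, String watchstation, String action) {
        entries.add(new Entry(scenarioTimeSeconds, watchstation, action));
    }

    /** Print every logged action in time order, e.g. for chain-of-error review. */
    public void replay() {
        List<Entry> ordered = new ArrayList<Entry>(entries);
        Collections.sort(ordered, new Comparator<Entry>() {
            public int compare(Entry a, Entry b) {
                return Long.compare(a.scenarioTimeSeconds, b.scenarioTimeSeconds);
            }
        });
        for (Entry e : ordered) {
            System.out.printf("[t+%06ds] %-6s %s%n", e.scenarioTimeSeconds, e.watchstation, e.action);
        }
    }
}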

i.        Naval Combat Simulation Video Games: The Precursor to Modern-Day Air-Defense Simulations

A review of research and system development in naval air-defense simulations, past and present, would not be complete without an examination of the video game industry's recreational software programs that attempt to model actual naval operations and combat. Before the Navy began investing considerable funds in modeling and simulation, during the 1980s and early 1990s, the software gaming industry offered programs that simulated naval combat in a variety of settings. First-person "shooter" games like Unreal Tournament™, Quake™, and Medal of Honor™ have recently become templates for several military research projects into the creation of training programs for ground infantry troops. Similarly, many of these naval games were designed with the assistance of former naval officers who served as advisers during development, and in some cases the games were so realistic that the military (due to the absence of equivalent programs) used them to train personnel. The following games represent some of the most popular and realistic naval simulation games which contained a significant battle group air-defense component as part of the game engine.

1.        Strike Fleet™: The Naval Task Force Simulator

Making its debut in 1987, Strike Fleet™ was one of the first and most successful video games that simulated naval battle group operations. The game arrived on the scene during the height of tensions between the United States and Iran in the Arabian Gulf and included familiar scenarios like oil tanker escort (through the Strait of Hormuz) and patrol/combat operations against the Iranian Navy. Another set of scenarios dealt with the British Navy during the Falkland Islands conflict. Strike Fleet™ also had an option which allowed the user to participate in a structured campaign (series of scenarios) against the Soviet Navy in the Atlantic Ocean and northern European theater. The game introduced a unique interface that allowed the user to concentrate on high-level battle group operations during a scenario, or the player could control the individual combat performance of ships and helicopters. This feature offered the players a considerable level of fidelity within the game, especially with respect to selection of task force size and composition (before the scenario commenced), control of the radar (range, active and passive modes), course, speed, and weapons employment (guns, short-range missiles, long-range missiles, torpedoes, CIWS, chaff, helicopter sonobuoys). Rich with detailed features and challenging opponents (who employed realistic tactics), Strike Fleet™ provided the user with a realistic game that required careful strategic thinking to win.[30]

                                                                        Figure 6.                     Strike Fleet™ Video Game.

 

2.        Fifth Fleet™

Fifth Fleet™, the video game, was introduced in 1994 and immediately set a standard for the accurate depiction of naval operations and realistic game play for several reasons. First, the designers developed a different type of engine for a naval simulation and implemented a turn-based game, similar to a number of strategic board games and role-playing games. The movement of platforms (ships, aircraft, submarines) across a map divided into equal-sized hexagonal grid units was dictated by the speed of those platforms. Also, the game more closely emulated actual naval operations by including such detailed features as smaller mission-oriented task groups, plans for coordinated air strikes and other aircraft missions, weather phenomena, and accurate logistical constraints (and consumption) which necessitated task force underway replenishment. Second, Fifth Fleet™ was designed as if the player were the fleet commander; therefore, numerous features, including automated control of individual units and task forces (through artificial-intelligence programming), freed the user from becoming mired in repetitive, time-consuming actions. Third, the game offered the player a variety of realistic, mature, politically-charged scenarios, which occurred in the Arabian Gulf and Indian Ocean regions and involved military forces from nineteen different countries.


                                                                        Figure 7.                     Fifth Fleet™ Video Game.

 

Fifth Fleet™ contained an impressive breadth and depth of realistic platforms along with highly accurate representations of the weapon systems (missiles, torpedoes, guns, etc.). The game differentiated over one hundred different classes of surface ships and submarines and over sixty different types of aircraft. Lastly, with the rapid expansion of Internet capability during the early-to-mid 1990s, Fifth Fleet™ introduced Internet game play for naval simulations, allowing users to move beyond simply playing against the computer and giving them the opportunity (and thrill) to challenge each other in the scenarios.[31]

3.        Harpoon: Modern Naval Combat Simulation™ Series Video Games

The Harpoon™ series (Harpoon 1-4™) has arguably been the most popular of the naval combat simulation genre, spanning nearly fourteen years, with Harpoon 1™ published in 1989 and the most recent version (Harpoon 4™) arriving in March 2003. Although the game received wide acceptance in its first incarnation, the second version, Harpoon 2™ (1994), became wildly popular (eventually becoming a video-gaming classic) when it introduced a level of realism never before seen nor since surpassed in a naval combat simulation. The Harpoon™ game engines were based on a realistic war-gaming and operational analysis model designed by the creator, Larry Bond, a former naval analyst and author. The games featured exceptionally accurate representations of platforms, weather phenomena, weapon systems, geography, friendly and opponent tactics, as well as believable scenarios and campaigns based on current and future political and/or actual conflicts.

 

                                                                        Figure 8.                     Harpoon Series™ Video Games[32].

 

The Harpoon™ games approached the control of the units and fleet from the task-force commander level to allow the player to concentrate on the strategic and operational missions and capabilities of naval operations. To facilitate this concept, the games employed a sophisticated artificial-intelligence engine (for both friendly and enemy combatants) to manage the behaviors and actions of those units realistically. The most recent incarnation, Harpoon 4™, contains the following carefully detailed and accurately represented features:

                    A detailed Order of Battle with over 1,000 ships, subs, and aircraft from the United States, Soviet Union, United Kingdom, Canada, France, Netherlands, Norway, Sweden, Belgium, Finland, Denmark, China, Australia, and Japan.

                    A highly detailed map of the Northern European region created from satellite imagery.

                    A map display with a variety of overlays, including weapons, sensor and fuel ranges, as well as bathymetric, weather, cloud cover, threat zone, and satellite data.

                    A detailed tactical 3D environment where players can view their ships, aircraft, and submarines at critical events.

                    An extensive database of units, weapons, and sensors.

                    Accurate sensor and electronic countermeasures modeling.[33]

The game also includes capabilities for Internet online game-play against other people. The Harpoon™ series has been considered so accurate in its representation of modern naval operations that several nations' militaries and military-affiliated organizations have used the game as part of their training, including the United States (United States Air Force Command and Staff College, U.S. Naval Institute), Australia (Australian Department of Defense), and Brazil (Brazilian Naval War College).[34]

4.        Summary

As discussed above, the video game industry has produced some very realistic, robust, and comprehensive naval simulation games that, for many years, surpassed even some of the military's best simulations. Originally designed for entertainment purposes, many of these programs were developed with very accurate models of naval operations, platforms, tactics, environments, and weapon systems, and they have only grown more accurate over the years. Consequently, the United States military has become one of the leading advocates of (and contractors for) military and military-relevant game simulations to train its personnel, especially since actual (live) training is usually extraordinarily expensive.

It is precisely because of this mission that the US Military is the world's largest spender on and user of Digital Game-Based Learning. The military uses games to train soldiers, sailors, pilots, and tank drivers to master their expensive and sensitive equipment. It uses games to teach mid-level officers…how to employ joint force military doctrine in battle and other situations. It uses games to teach senior officers the art of strategy.[35]

The Harpoon™ series has attained a level of accuracy so close to actual naval operations that the military has made use of it. These games have not been used extensively by the military, however, because, despite their accurate operational models, their primary purpose is entertainment. Consequently, they lack many key features, such as comprehensive logging of events for later analysis (among many others), needed to make them suitable and attractive for widespread employment. The capability to review the record following a simulation or training event to formulate lessons learned and discover potential areas for improvement is one of the paramount objectives for any type of training conducted by the military. Since the games often exclude these features, their overall usefulness is limited.

The ADC Simulation has much in common with these games because it attempts to simulate naval operations such as the air defense of the battle group. Some of the look, feel, and interactivity of the program's interface was adopted from the strategy games, as were the capabilities to structure the simulation environment before commencement and to modify the time compression/progression of the scenarios. However, the ADC Simulation differs from the above video games because its overall objective is to train military personnel and provide insight into the performance of battle group air defense, with an eye toward understanding the mental processes of the watchstanders involved so as to gain experience (and lessons learned), not to entertain.

The wargamer [recreational user] wants a historically valid game, but also an enjoyable and entertaining experience; the military gamer wants a historically valid game, but both enjoyment and entertainment are secondary criteria…Generally, military games may be characterized by an extended learning period and an extending playing period - both of which combine to often prohibit the lessons learned because of time constraints. Thus, certain commercial wargames can offer lessons to the military professional. Such games offer playability, realistic lessons learned, and/or game aspects, which the military professional could adapt for his own games.[36]

We recognized that the inclusion of certain game-related features in the ADC Simulation would enhance the usability, playability, and satisfaction of the program experience and would improve its training value.

j.        Comparison and Contrast of the Cruiser ADC Simulation Program

Although there are many areas of commonality between the ADC Simulation and previous research, the simulation occupies a unique and relevant niche in the study and development of naval air defense for the following reasons, which support its usefulness:

                    It focuses on the decision-making and other mental processes of the watchstanders as a function of the operational environment in which they operate.

                    It examines the performance of battle group air defense by studying performance of the air-defense watchstanders.

                    It delves into the role of the critical skills necessary for the performance of the watchstanders and explores the influence that a watchstander's various proficiency levels have on the performance of the air-defense team.

                    It uses the data from research and interviews with air-defense experts to implement the capability to select various proficiency levels, experience levels, fatigue levels, and type of decision-maker psychology for each of the watchstanders in the simulation.

                    It allows the user to configure the external environmental attributes for the simulation (i.e. number of contacts, scenario threat level, weather, doctrine, probability and task time settings) to determine the effects of such changes on the performance of the watchstanders.

                    It allows the user to configure the CIC equipment operational-readiness attributes to determine the effects of such changes on the performance of the watchstander.

                    It allows the user to watch the performance of the air-defense team over an extended period of time (using time compression) so as to examine the positive actions and mistakes the watchstanders make concerning the identification of air contacts. The program will display ground-truth information so the user can always compare the actual situation to the perceived situation of the watchstanders.

                    It employs a Multi-Agent System architecture to simulate the watchstanders, which provides for a realistic reproduction of human behaviors within the simulation.

                    It allows the user to record into log files all of the actions, inputs, and outputs of each watchstander during a scenario for later analysis and review for performance anomalies or searches for chain-of-errors for incorrect air-defense identifications or engagements.

k.        Research Questions Posed for the Cruiser ADC Simulation Program

During the development of the ADC Simulation, we attempted to gain insight into the complex interactions and influences involved in air-defense operations to determine the degree to which individual watchstander performance (skill, experience, fatigue), equipment operational readiness, and the external environment (number of contacts, weather, etc.) affect the overall performance of the ADC team. The effect of these factors on the performance of the individual watchstander was also explored. The following questions were posed:

                    What are the collective critical skills necessary for a CIC team to perform ADC duties/operations effectively?

                    What are the individual critical skills sets necessary for the primary ADC personnel to perform their responsibilities effectively?

                    How do you measure the collective proficiency and performance level of the ADC team?

                    How do you measure the proficiency and performance level of an individual ADC watchstander?

                    What are the effects (positive and negative) of one CIC watchstander�s performance on another watchstation?

                    How does the decision-making type of the ADC team leadership (F-TAO, F-AAWC) affect the overall performance of the team?

                    How does the external environment affect the collective performance of the ADC team and the performance of the individual watchstanders?

                    What are the maximum effective performance limits of the ADC team, collectively and individually, when maximum external environmental stress is experienced?

                    What influence or effect can degraded performance of critical air-defense equipment have on the performance of the ADC team, collectively and individually?

                    What influence or effect can degraded human performance due to fatigue have on the performance of the ADC team, collectively and individually?

While conducting interviews at the AEGIS Training & Readiness Center (ATRC) Detachment in San Diego and collecting data from various sources, insight was gained into some of these questions. Many of the other questions required the completion of the ADC Simulation before they could be answered, so that specific scenarios could be performed and parametric analysis conducted on the data/results. Once this analysis was completed, a survey, consisting of scenarios based on the results of the ADC Simulation tests, was given to the experts at the ATRC to determine the simulation's realism as compared to their professional air-defense experiences.

III.     User-Centered Design (UCD) Process of the ADC Simulation Human-Computer Interface (HCI)

A.       Need for Utilization of the User-Centered Design (UCD) Process in Developing Computer Program Interfaces

Almost everyone in the Navy has a story to tell about a particular piece of hardware or a computer program that greatly frustrated them because it was difficult to use. Despite the effectiveness or necessity of the equipment or software, usability problems that impeded the productivity of the user significantly hampered its utilization. This situation seemed to reach its apex in the 1980s and early 1990s as technological innovations such as computers transformed the workplaces on naval ships, submarines, bases, and squadrons. During this period, there were undoubtedly countless instances of systems with poor user interfaces that frequently translated into lost productivity and increased frustration for the "victims." However, the USS Vincennes incident of 1988, involving the engagement and shoot-down of an Iranian commercial airliner, highlights the negative impact that combat systems and programs with poor usability can contribute to an already dangerous and tense situation. Without recounting the entire situation (other sources provide a comprehensive accounting) or trivializing the other major factors involved in the incident, the usability design of the Combat Information Center (CIC) consoles was considered to have contributed negatively to the processing and dissemination of vital information to the key watchstanders.

Fortunately, starting in the mid-to-late 1990s, with the growth of the human-computer interface design community, the importance of engineering usability into combat and information systems has increased significantly.

The last decade of research and practice in user interface design has [created] some good models for designing user interfaces.Getting input from users early and continuously throughout the design process, using rapid prototyping and iterative design techniques, and conducting formal usability testing are now proven methods for assuring good user interfaces.[37]

Several principles govern the HCI community when designing effective interfaces with good usability.

                    Use Simple and Natural Dialogue

                    Speak the User's Language (user knowledge, level of understanding)

                    Minimize User Memory Load

                    Ensure Consistency throughout the Interface (improve learnability)

                    Provide Feedback when Users Perform Actions (keep the user informed)

                    Provide Useful and Visible Shortcuts to Improve Usability

                    Provide Clear, Helpful Error Messages (plain language)

                    Prevent User-Initiated System Errors by Careful Design of the Interface[38]

To ensure usability, user satisfaction, and good productivity are attained in the program, the User-Centered Design (UCD) Process was used to develop the ADC Simulation interface. The following six phases make up the UCD Process and will be discussed in greater detail in the next sections:

                    Phase One:       Creation of the Problem Statement

                    Phase Two:       Requirements Gathering

                    Phase Three:     Conceptual Design of the ADC Simulation

                    Phase Four:      Implementation of the ADC Simulation Interface

                    Phase Five:      Usability Analysis of the ADC Simulation Interface

                    Phase Six:       Redesign/Modification of the ADC Simulation Interface

B.       UCD Process Phase One: Problem Statement

1.        Problem Statement

The goal of this multi-agent system is to develop an autonomous agent-based artificial intelligence simulation of an AEGIS cruiser performing Battle Group Air-Defense Commander duties.

2.        Activity/Utility to Users

The resultant simulation will be used to gain insight and understanding into the numerous factors that influence (positively or negatively) the effective performance of both the CIC ADC team collectively and watchstation personnel individually. Additionally, the simulation will allow for the exploration of team and individual watchstation performance during abnormal or high-intensity/stress situations to determine the role of skill proficiency levels in the effective execution of ADC duties. Furthermore, this simulation will give naval war-fighters at the unit (ship) level the ability to experiment with various modifications to ADC tactical doctrine and organization, to gain insight into the potential effects of those changes on CIC team performance before implementing them. Lastly, this simulation will serve as a proof of concept for the usefulness of similar simulations in training ship personnel on various team-oriented missions/duties and CIC operations.

3.        Users

The potential users of this simulation will be training and doctrine-formulation commands, waterfront training teams, and individual combat units (ships).

4.        Criteria for Judgment

The primary criterion for judgment will be the usefulness of the simulation to the potential users. This criterion includes the ease of setup, modification, and execution of the simulation for the desired output.

C.       UCD Process Phase Two: Requirements Gathering

1.        Needs Analysis

a.        Situation of Concern

Air Warfare is the most rapid, intense, and devastating type of warfare that the U.S. Navy currently trains for, and battle group operations are primarily focused on gaining proficiency in this mission area. Due to the fast-paced, uncertain, and dangerous nature of air warfare, the battle group commander's Air-Defense Team (the AEGIS cruiser CIC team) must be trained extensively in the fundamental tenets of these operations to effectively protect the aircraft carrier, high-value units, and other naval ships in the vicinity. Given the immense range of duties and responsibilities, there are multitudes of individual watchstation and collective skill sets that must be mastered to perform the ADC duties effectively.

b.        Need/Utility of System

(1)      Current State. The waterfront training teams (AEGIS Training and Readiness Center (ATRC) detachments) are charged by the fleet type commanders with providing Air-Defense Commander (ADC) training to the cruisers, and the quality of the training they provide is typically outstanding. However, ADC operations are considerably complex, and the waterfront training teams are limited by the available training time as well as the scope of the training attempted. Furthermore, the interactions (watchstander to watchstander, ship to ship, ship to aircraft, watchstander to equipment, etc.) that are part of daily operations are numerous, and potential ADC team performance deficiencies may not be noticed during the limited training periods.
(2)      Need. The limitations of human comprehension of ADC operations, due to the countless interactions, place a barrier on the level, type, and quality of training that can be accomplished. Because there are many different variables to account for in these operations, the training teams and ships can only rely on their collective past experiences as the basis for producing effective training. This limits the potential gain of the training because training teams and ships are formulating ADC scenarios for the future based on experiences from the past. To surpass this limitation, both groups require a system that will enable them to build scenarios, based on the current skill and training levels of the ADC team as well as the environment they will face, that will allow them to train toward more realistic threats.
(3)      Solution. The ADC Simulation will provide a solution to the problems discussed above. After an initial assessment of the training, experience, and equipment readiness levels of a specific ship, the initial settings for the ADC team and environment can be entered into the system. Upon completion of the setup, the program will allow the training teams (as well as the ships) to create simulations based upon the ship's potential operational scenarios to discover performance deficiencies. The training teams and ships will use the results from the simulation to provide more focused training in the areas where deficiencies were noted. Also, the program can be employed to validate the usefulness of future scenarios intended for the training of the ships. For the doctrine-formulation commands, this simulation will give them the opportunity to evaluate the validity of theoretical changes to ADC and AEGIS doctrine before implementing them in the fleet.

c.        Features of System

The ADC Simulation will allow for the observation and collection of data in three main categories: individual watchstander performance, CIC team performance, and overall simulation performance.

Individual Watchstanders:

 

Determine the effect of varying�

 

                    Skill levels on a single watchstander's performance.

                    Experience levels on a single watchstander's performance.

                    Type of decision-maker (F-TAO/F-AAWC) on a watchstander's performance.

                    Fatigue levels on a single watchstander's performance.

                    Equipment operational level on a single watchstander's performance.

                    Contact density on a single watchstander's performance.

                    Contact type (hostile, unknown, etc.) on a single watchstander's performance.

                    Atmospheric conditions on a single watchstander's performance.

                    Record watchstander decisions for post-simulation review (Log).

 

CIC Team:

 

Determine the effect of varying�

 

                    Skill levels on collective CIC team performance.

                    Experience levels on collective CIC team performance.

                    Type of decision-maker (F-TAO/F-AAWC) on collective CIC team performance.

                    Fatigue levels on collective CIC team performance.

                    Equipment operational level on collective CIC team performance.

                    Contact density on collective CIC team performance.

                    Contact type (hostile, unknown, etc.) on collective CIC team performance.

                    Atmospheric conditions on collective CIC team performance.

                    AEGIS doctrine on CIC team performance.

                    ADC Battle Doctrine on CIC team performance.

 

Simulation:

 

                    Run simulations over user-determined period of time (time compression available).

                    Allow user to view CIC team's contact ID process (and engagement process if applicable).

                    Allow users to see errors made by CIC team as they happen.

                    Allow users to interact with the CIC team agents to view current decision logs and modify various attributes.

 

2.        User Analysis

a.        Utility of the Simulation

Since this program is designed to simulate ADC operations, the pool of users will probably be restricted to the following three groups: AEGIS waterfront training commands (ATRC detachments), AEGIS ships (ADC personnel), and AEGIS/ADC doctrine-formulation commands. For these users, the simulation offers the two significant benefits listed below:

                    The training commands and ships will employ the simulation to provide some foresight into the future performance of shipboard watch-teams under various scenarios. The information/results gained from running these simulations will assist them in providing more focused and effective training for these watch-teams.

                    The doctrine-formulation commands will use the simulation to conduct evaluations on potentially new/theoretical AEGIS and ADC doctrine changes to provide some data on the performance of those modifications. This data could then be analyzed and reviewed before moving to the field-testing phase of the implementation.

b.        Collective Team Skills and Experience Required (User Characteristics)

Although a single user highly experienced in ADC could effectively use the simulation, it is more likely that a team of users representing the various skills and watchstation backgrounds will be employed to initially set up and use the program. The following is a list of the qualifications, skills, and experience a team planning to use the simulation should possess:

 

                    Naval officers with 5 or more years of fleet experience

                    Senior enlisted personnel (E-6 and above) with 10 or more years of experience

                    All personnel familiar with Battle Group ADC operations

                    Personnel familiar with the performance/conduct of the following watchstations and their requisite skills: F-TAO, F-AAWC, S-TAO, S-AAWC, Red Crown, TIC, IDS, RSC, CSC, MSS, EWCO

                    Personnel familiar with carrier launch & recovery air operations

                    Personnel familiar with aircraft, flight intercept, and control operations

                    Personnel familiar with AEGIS Core Tactical Doctrine

                    Personnel who understand the basic operation of personal computers including Windows programs

c.        Frequency of Simulation Use

The simulation program usage will probably vary depending upon where a ship is in the training/work-up cycle. If it is somewhere in the middle of the training cycle, it will probably be used (by the waterfront training teams and ships) fairly often (3-5 times a week) to provide information to guide the ship's training plan. However, if the ship has completed the training cycle and is deployed, it may be used less frequently (1-2 times a month).

3.        Task Analysis

During the preliminary design of the ADC Simulation interface, four primary tasks were identified along with several associated subtasks for each task.

 

Primary Task 1: Input Watchstander Attributes:

     Subtask 1.A: Set Skill Levels

     Subtask 1.B: Set Experience Levels

     Subtask 1.C: Set Fatigue Levels

     Subtask 1.D: Set Decision-Maker Type

 

Primary Task 2: Input Equipment Setup:

     Subtask 2.A: Set Equipment Readiness Levels

     Subtask 2.B: Input Equipment Setup (Radar, Data Links)

 

Primary Task 3: Input Scenario Setup

     Subtask 3.A: Set Atmospheric Conditions

     Subtask 3.B: Set Contact Density

     Subtask 3.C: Set Scenario Threat Level

 

Primary Task 4: Input Doctrine Setup

     Subtask 4.A: Set ADC Battle Doctrine

     Subtask 4.B: Set AEGIS Doctrine

 

Although many of these subtasks are listed individually, it is very likely that upon implementation some of them will be centralized into one interface window. For example, the subtasks in Primary Task 1 could be combined into one input window for each watchstander to simplify the interface (and increase its ease of use) for the user.
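As a rough illustration of that consolidation, the hypothetical Swing sketch below places the four primary setup tasks on one tabbed window; the tab contents are left empty because the actual field layout is defined later in the design, and none of the names are taken from the thesis code.

// Hypothetical sketch: one tabbed setup window covering all four primary tasks.
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.JTabbedPane;
import javax.swing.SwingUtilities;

public class SetupWindowSketch {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                JTabbedPane tabs = new JTabbedPane();
                tabs.addTab("Watchstander Attributes", new JPanel()); // Primary Task 1 (Subtasks 1.A-1.D)
                tabs.addTab("Equipment Setup", new JPanel());         // Primary Task 2
                tabs.addTab("Scenario Setup", new JPanel());          // Primary Task 3
                tabs.addTab("Doctrine Setup", new JPanel());          // Primary Task 4

                JFrame frame = new JFrame("ADC Simulation Setup (illustrative)");
                frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
                frame.getContentPane().add(tabs);
                frame.setSize(480, 320);
                frame.setVisible(true);
            }
        });
    }
}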

D.       UCD Process Phase Three: Conceptual Design of the ADC Simulation Program

1.        Conceptual Design Introduction

Phase Three of the UCD Process commenced the actual definition and categorization of the critical components that comprise the ADC Simulation and was completed in the following four steps. First, the team conducted comprehensive interviews with experienced air-defense Subject Matter Experts (SMEs) from the AEGIS Training and Readiness Center (ATRC) Detachment in San Diego, California, and the Fleet Technical Support Center Pacific (FTSCPAC) to collect data about battle group air-defense operations onboard an AEGIS cruiser. These personnel possessed between five and fifteen years of naval air-defense experience, and all of them were considered experts in this field. The interviews covered the following topics:

                    Air-Defense Identification & Threat Assessment Process

                    Battle Group Air Defense & Aircraft Operations

                    Collective Skills Required for Effective ADC Team Performance

                    Individual Watchstander Skills Required for Effective Performance

                    Differences Between Skill & Experience

                    Measures of Effectiveness (Successful Task Performance) of Skill

                    Measures of Effectiveness of Experience

                    Effect of Fatigue on Individual Watchstander Performance

                    Effect of Fatigue on Collective ADC Team Performance

                    Effect of Individual Watchstander Performance on ADC Team Performance

                    Effect of CIC Equipment Readiness on Individual and ADC Team Performance

                    Effect of External Environment on Individual and ADC Team Performance

                    Classification of Different Types of Decision-makers for F-TAO & F-AAWC

                    Effect of Different Types of Decision-Makers on ADC Team Performance

                    Effect of Watchstander Mistakes on ADC Team Performance

                    Effect of Different Levels of Individual Watchstander Skill & Experience Proficiencies on ADC Team Performance

                    Classifying Different Levels of Skill, Experience, and Fatigue

Following this research collection effort, the Subject Matter Experts' data was analyzed and used to develop the conceptual foundation and structure for the design of the simulation.

Second, the fundamental components of the simulation were determined, which in this case were the agents (watchstanders) and the objects (various items in both the interface and the simulation itself). Upon completion of this step, the attributes of the agents and the objects were ascertained and listed. Third, the relationships between each agent and the other agents and objects in the simulation were explicitly defined, with the same task performed for each object. Lastly, utilizing the information from determining the relationships among agents and objects, all of the actions (for each agent and object) were defined. When this process was finished, the team had generated a well-defined, comprehensive, high-level view of the interrelationships, interactions, and processes that would occur in the ADC Simulation, which simplified the development of the actual prototype discussed under UCD Process Phase Four and helped to reduce a number of potential user-interface errors. Following is the conceptual design of the simulation that was used to produce the first program interface (Section E). After the Phase Five Usability Analysis was completed, minor adjustments were made to the conceptual design, which are reflected in the subsequent program interface displayed in Section G.

2.        Conceptual Design

a.        Agents

                    Force Tactical Action Officer (F-TAO)

                    Ship Tactical Action Officer (TAO)

                    Force Anti-Air Warfare Coordinator (F-AAWC)

                    Ship Anti-Air Warfare Coordinator (AAWC)

                    Combat Systems Coordinator (CSC)

                    Radar Systems Coordinator (RSC)

                    Missile Systems Supervisor (MSS)

                    Red Crown Watchstander (RC)

                    Electronic Warfare Control Officer (EWCO)

                    Tactical Information Coordinator (TIC)

                    Identification Supervisor (IDS)

b.        Objects

                    Simulation Scenario

                    Simulation Interface: Shortcut Control Buttons

                    Simulation Interface: Tactical Display

                    Simulation Interface: Tactical Display Contact Icons (Air, Surface)

                    Simulation Interface: Contact Display

                    Simulation Interface: CIC Agent Display

                    Simulation Interface: CIC Agent Display Icons

                    Simulation Interface: Agent Attributes Display

                    Simulation Interface: Menu Bar

                    Simulation Interface: CIC Equipment Display Icons

                    Simulation Interface: CIC Equipment Pop-up Menu

                    Simulation Interface: Contact Pop-up Menu

                    CIC Equipment (various types)

                    Simulation Interface: Agent Pop-up Menu

                    Agent Decision History Log (one for each agent)

                    Equipment Status Log (one for each piece of equipment)

                    Scenario Event Log (one for each scenario executed)

                    Contacts (Air, Surface)

 

 

 

 

 

c.        Necessary Attributes of Agents

Decision/Psych Profile: Type:

 

Attribute Sets                        Proficiency Levels for Attributes
Type of Decision-Maker                Aggressive, Balanced, Reserved
Situation Assessment                  Expert, Experienced, Basic
Tactical Situation Maintenance        Expert, Experienced, Basic
Communication                         Expert, Experienced, Basic
Information Management                Expert, Experienced, Basic
AD Battle Doctrine                    Expert, Experienced, Basic
Combat Leadership                     Expert, Experienced, Basic
Platform Knowledge                    Expert, Experienced, Basic
Watch Experience Level                Expert, Experienced, Newly Qualified
Fatigue Level                         Rested/Alert, Tired, Exhausted

 

                                                                                                                                                    Table 1.          F-TAO.

 

Attribute Sets                        Proficiency Levels for Attributes
Situation Assessment                  Expert, Experienced, Basic
Tactical Situation Maintenance        Expert, Experienced, Basic
Communication                         Expert, Experienced, Basic
Information Management                Expert, Experienced, Basic
AD Battle Doctrine                    Expert, Experienced, Basic
Combat Leadership                     Expert, Experienced, Basic
Platform Knowledge                    Expert, Experienced, Basic
Watch Experience Level                Expert, Experienced, Newly Qualified
Fatigue Level                         Rested/Alert, Tired, Exhausted

 

                                                                                                                                                        Table 2.          TAO.

 

Decision/Psych Profile: Level:

 

Attribute Sets                        Proficiency Levels for Attributes
Type of Decision-Maker                Aggressive, Balanced, Reserved
Situation Assessment                  Expert, Experienced, Basic
Tactical Situation Maintenance        Expert, Experienced, Basic
Communication                         Expert, Experienced, Basic
Information Management                Expert, Experienced, Basic
AD Battle Doctrine                    Expert, Experienced, Basic
Combat Leadership                     Expert, Experienced, Basic
Platform Knowledge                    Expert, Experienced, Basic
Watch Experience Level                Expert, Experienced, Newly Qualified
Fatigue Level                         Rested/Alert, Tired, Exhausted

 

                                                                                                                                                Table 3.          F-AAWC.

 

Attribute Sets                        Proficiency Levels for Attributes
Situation Assessment                  Expert, Experienced, Basic
Tactical Situation Maintenance        Expert, Experienced, Basic
Communication                         Expert, Experienced, Basic
Information Management                Expert, Experienced, Basic
AD Battle Doctrine                    Expert, Experienced, Basic
Combat Leadership                     Expert, Experienced, Basic
Platform Knowledge                    Expert, Experienced, Basic
Watch Experience Level                Expert, Experienced, Newly Qualified
Fatigue Level                         Rested/Alert, Tired, Exhausted

 

                                                                                                                                                    Table 4.          AAWC.

 

Attribute Sets                        Proficiency Levels for Attributes
Situation Assessment                  Expert, Experienced, Basic
Tactical Situation Maintenance        Expert, Experienced, Basic
Communication                         Expert, Experienced, Basic
Information Management                Expert, Experienced, Basic
Systems Troubleshooting               Expert, Experienced, Basic
AEGIS Doctrine Employment             Expert, Experienced, Basic
Platform Knowledge                    Expert, Experienced, Basic
Watch Experience Level                Expert, Experienced, Newly Qualified
Fatigue Level                         Rested/Alert, Tired, Exhausted

 

                                                                                                                                                        Table 5.          CSC.

 

Attribute Sets                        Proficiency Levels for Attributes
Situation Assessment                  Expert, Experienced, Basic
Tactical Situation Maintenance        Expert, Experienced, Basic
Radar EM Fundamentals                 Expert, Experienced, Basic
Atmospheric/Environmental             Expert, Experienced, Basic
Radar Sensitivity Calibration         Expert, Experienced, Basic
Radar Power Level Calibration         Expert, Experienced, Basic
Radar System Troubleshooting          Expert, Experienced, Basic
Communication                         Expert, Experienced, Basic
Radar Jamming Evaluation              Expert, Experienced, Basic
Radar Land/Sea Interface Cal.         Expert, Experienced, Basic
AEGIS Core Doctrine                   Expert, Experienced, Basic
Watch Experience Level                Expert, Experienced, Newly Qualified
Fatigue Level                         Rested/Alert, Tired, Exhausted

 

                                                                                                                                                        Table 6.          RSC.

 

 

 

 

 

 

Attribute Sets                        Proficiency Levels for Attributes
Missile Systems Employment            Expert, Experienced, Basic
Situation Assessment                  Expert, Experienced, Basic
CIWS Employment                       Expert, Experienced, Basic
Missile/CIWS Troubleshooting          Expert, Experienced, Basic
Communication                         Expert, Experienced, Basic
Watch Experience Level                Expert, Experienced, Newly Qualified
Fatigue Level                         Rested/Alert, Tired, Exhausted

 

                                                                                                                                                        Table 7.          MSS.

 

Attribute Sets                        Proficiency Levels for Attributes
Communication                         Expert, Experienced, Basic
Aircraft Control                      Expert, Experienced, Basic
Carrier Operations                    Expert, Experienced, Basic
IFF System Operation                  Expert, Experienced, Basic
Watch Experience Level                Expert, Experienced, Newly Qualified
Fatigue Level                         Rested/Alert, Tired, Exhausted

 

                                                                                                                                              Table 8.          Red Crown.

 

Attribute Sets                        Proficiency Levels for Attributes
Situation Assessment                  Expert, Experienced, Basic
Tactical Situation Maintenance        Expert, Experienced, Basic
Radar EM Fundamentals                 Expert, Experienced, Basic
Atmospheric/Environmental             Expert, Experienced, Basic
ES Equipment Operation                Expert, Experienced, Basic
ES Analysis/Classification            Expert, Experienced, Basic
Equipment Troubleshooting             Expert, Experienced, Basic
Communications                        Expert, Experienced, Basic
Watch Experience Level                Expert, Experienced, Newly Qualified
Fatigue Level                         Rested/Alert, Tired, Exhausted

 

                                                                                                                                                    Table 9.          EWCO.

 

Attribute Sets                        Proficiency Levels for Attributes
Link Equipment Operation              Expert, Experienced, Basic
B.G. Link Equip Knowledge             Expert, Experienced, Basic
Link Communication                    Expert, Experienced, Basic
Link Coordination                     Expert, Experienced, Basic
Link Resolution                       Expert, Experienced, Basic
Watch Experience Level                Expert, Experienced, Newly Qualified
Fatigue Level                         Rested/Alert, Tired, Exhausted

 

                                                                                                                                                       Table 10.        TIC.

Attribute Sets                        Proficiency Levels for Attributes
Information Input                     Expert, Experienced, Basic
IFF Challenge                         Expert, Experienced, Basic
Query & Warning Evaluation            Expert, Experienced, Basic
Communications                        Expert, Experienced, Basic
Watch Experience Level                Expert, Experienced, Newly Qualified
Fatigue Level                         Rested/Alert, Tired, Exhausted

 

                                                                                                                                                       Table 11.        IDS.
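The attribute sets tabulated above share a common shape: each watchstation carries a named set of skills, each at one of three proficiency levels, plus a watch experience level and a fatigue level (and, for the F-TAO and F-AAWC, a decision-maker type). The following Java sketch is one possible, purely illustrative representation; the class and method names are assumptions rather than the thesis implementation.

// Illustrative sketch of per-agent attribute sets and proficiency levels.
import java.util.LinkedHashMap;
import java.util.Map;

public class AgentAttributes {
    public enum Proficiency { EXPERT, EXPERIENCED, BASIC }
    public enum Experience  { EXPERT, EXPERIENCED, NEWLY_QUALIFIED }
    public enum Fatigue     { RESTED_ALERT, TIRED, EXHAUSTED }
    public enum DecisionMakerType { AGGRESSIVE, BALANCED, RESERVED }

    private final Map<String, Proficiency> attributeSets = new LinkedHashMap<String, Proficiency>();
    private Experience watchExperience = Experience.EXPERIENCED;
    private Fatigue fatigueLevel = Fatigue.RESTED_ALERT;
    private DecisionMakerType decisionMakerType;   // only used by the F-TAO and F-AAWC

    /** Example: default F-TAO attribute sets from Table 1, all set to Experienced. */
    public static AgentAttributes forceTaoDefaults() {
        AgentAttributes a = new AgentAttributes();
        a.decisionMakerType = DecisionMakerType.BALANCED;
        String[] sets = { "Situation Assessment", "Tactical Situation Maintenance",
                          "Communication", "Information Management", "AD Battle Doctrine",
                          "Combat Leadership", "Platform Knowledge" };
        for (String s : sets) {
            a.attributeSets.put(s, Proficiency.EXPERIENCED);
        }
        return a;
    }
}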

 

d.        Necessary Attributes of Objects

(1)      Simulation Scenario.

                    Atmospheric Conditions (weather conditions, temperature)

                    Contact Density

                    Scenario Threat Level

                    Contact Arrival Rate

                    Hostile/Unknown Contact Aggressiveness Level

                    AEGIS Doctrine

                    Air-Defense Doctrine
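A minimal sketch of how these scenario-level attributes might be grouped into a single settings object is shown below; the field names, types, and default values are assumptions for illustration only.

// Illustrative holder for the scenario attributes listed above (assumed names/values).
public class ScenarioSettings {
    String atmosphericConditions = "clear, 85 F";    // weather conditions, temperature
    int contactDensity = 30;                          // number of simultaneous contacts
    String scenarioThreatLevel = "medium";            // low / medium / high
    double contactArrivalRatePerMinute = 1.0;
    String hostileAggressivenessLevel = "moderate";
    String aegisDoctrine = "default";
    String airDefenseDoctrine = "default";
}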

 

(2)      Simulation Interface: Shortcut Control Buttons Display.

                    Start/Continue Simulation Button

                    Pause Simulation Button

                    Stop Simulation Button

                    Increase Time Compression

                    Decrease Time Compression

 

(3)      Simulation Interface: Tactical Display.

                    Air, Surface Contacts (clickable)

 

(4)      Simulation Interface: Tactical Display Contact Icons.

                    Contact Attributes (specific to the contact)

 

 

 

 

 

(5)      Simulation Interface: Contact Data Display.

                    Contact Data Display Window

                    Data for this display via left mouse button click on a Tactical Display Contact Icon

 

(6)      Simulation Interface: CIC Agent Display.

                    CIC Agent Icons (clickable)

                    CIC Equipment Icons (clickable)

 

(7)      Simulation Interface: CIC Agent Display Icons (Agents).

                    Agent Attributes (specific to the agent)

 

(8)      Simulation Interface: Agent Attributes Display.

                    Agent Attribute Display Window

                    Data for this display via left mouse button click on an Agent Display Icon

 

(9)      Simulation Interface: Menu Bar.

                    Scenario Utilities

                    Watchstander Attributes

                    CIC Equipment Setup

                    Scenario External Attributes

                    Doctrine Setup

 

(10)    Simulation Interface: CIC Equipment Display Icons (Equipment).

 

                    Equipment Status (specific to the equipment)

 

(11)    Simulation Interface: Agent Pop-up Menu (mouse right-button click).

 

                    Display Agent Decision History Log

                    Modify Agent Attributes

 

 

(12)    Simulation Interface: CIC Equipment Pop-up Menu (mouse right-button click).

 

                    Display CIC Equipment History Log

                    Modify CIC Equipment Attributes/Status

 

(13)    Simulation Interface: Contact Pop-up Menu (mouse right-button click).

 

                    Modify Contact Type/Attributes

 

(14)    CIC Equipment (various types).

                    Types of Equipment

                    SPY-1B Radar

                    Link 16 Tactical Data System

                    Link 11 Tactical Data System

                    SLQ-32 System, OJ-451 CIC Consoles

                    Readiness Levels (for each type)

                    Fully Operational

                    Partially Degraded

                    Severely degraded

                    Non-operational
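These four readiness levels map naturally onto an enumeration. The sketch below is illustrative only; the capability factors are assumed values, not figures from the thesis.

// Sketch of the four equipment readiness levels above; factors are assumptions.
public enum EquipmentReadiness {
    FULLY_OPERATIONAL(1.00),
    PARTIALLY_DEGRADED(0.75),
    SEVERELY_DEGRADED(0.40),
    NON_OPERATIONAL(0.00);

    /** Fraction of nominal capability the equipment provides at this level. */
    public final double capabilityFactor;

    EquipmentReadiness(double capabilityFactor) {
        this.capabilityFactor = capabilityFactor;
    }
}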

 

(15)    Agent Decision History Log (one for each agent).

                    History Log

 

(16)    Equipment Status Log.

                    Status/History Log

 

(17)    Scenario Event Log.

                    Log of major events in scenario

 

 

 

 

 

 

(18)    Contacts.

                    ***CIC-perceived/assigned Data***

                    Contact # - The simulation assigned index number for the contact

                    Track # - The CIC/Agent assigned index number

                    Classification - Hostile, Suspect, Unknown, Neutral, Friendly

                    Speed - Measured in Nautical miles per hour

                    Course - Measured in degrees true (0-359)

                    Bearing - Measured in degrees true (0-359)

                    Altitude - Measured in feet above sea level

                    ES Emissions - specific electronic equipment signal emissions

                    Type of Contact - (air, surface)

                    Specific platform - (Mig-27, F-14, patrol boat, destroyer)

                    ***Actual Data***

                    Contact # - The simulation assigned index number for the contact

                    Track # - The CIC/Agent assigned index number

                    Classification - Hostile, Suspect, Unknown, Neutral, Friendly

                    Speed - Measured in Nautical miles per hour

                    Course - Measured in degrees true (0-359)

                    Bearing - Measured in degrees true (0-359)

                    Altitude - Measured in feet above sea level

                    ES Emissions - specific electronic equipment signal emissions

                    Type of Contact - (air, surface)

                    Specific platform - (Mig-27, F-14, patrol boat, destroyer)
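Because every contact carries both a CIC-perceived data set and an actual (ground-truth) data set with identical fields, one convenient representation is a contact object holding two copies of the same track-data structure, as in the hypothetical sketch below; all names are illustrative assumptions.

// Sketch of a contact carrying both ground-truth and CIC-perceived track data.
public class Contact {
    /** One snapshot of track data, used for both truth and perception. */
    public static class TrackData {
        String classification;       // Hostile, Suspect, Unknown, Neutral, Friendly
        double speedKnots;
        double courseDegreesTrue;    // 0-359
        double bearingDegreesTrue;   // 0-359
        double altitudeFeet;
        String esEmissions;          // specific electronic equipment signal emissions
        String contactType;          // air, surface
        String platform;             // e.g. MiG-27, F-14, patrol boat, destroyer
    }

    int contactNumber;               // simulation-assigned index number
    int trackNumber;                 // CIC/agent-assigned index number
    final TrackData actual = new TrackData();     // ground truth
    final TrackData perceived = new TrackData();  // CIC team's current picture

    /** True when the CIC team's classification matches ground truth. */
    public boolean identifiedCorrectly() {
        return actual.classification != null
            && actual.classification.equals(perceived.classification);
    }
}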

e.        Agent Relationships

                    Each agent has a set of watchstander attributes that can be set/modified in the Watchstander Attributes menu.

                    Each agent has a one-to-one relation with the Agent Icons in the Agent Attribute Display and CIC Agent Display.

                    Each agent has a one-to-one relation with one Decision History Log.

                    Each agent has a zero-to-many relation with contacts (processing contacts).

                    Each agent's decision history log has one associated pop-up menu to display the log.

                    Each agent has a one-to-many relation with the CIC equipment.

                    Each agent has a one-to-many relation with other agents (communications).

                    Each agent has a one-to-one relation with an associated pop-up menu.

                    Each Simulation Scenario contains a set of CIC watchstander agents.
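A rough sketch of how these relationships could appear in an agent's data model follows, reusing the illustrative DecisionHistoryLog, AgentAttributes, and Contact classes sketched earlier; the field names are assumptions, and the comments mirror the cardinalities listed above.

// Illustrative agent data model reflecting the relationships listed above.
import java.util.ArrayList;
import java.util.List;

public class WatchstanderAgent {
    private final String watchstation;                                   // e.g. "F-TAO"
    private final AgentAttributes attributes;                            // set via the Watchstander Attributes menu
    private final DecisionHistoryLog decisionLog = new DecisionHistoryLog();            // one-to-one
    private final List<Object> cicEquipment = new ArrayList<Object>();                   // one-to-many
    private final List<WatchstanderAgent> commPartners = new ArrayList<WatchstanderAgent>(); // one-to-many (communications)
    private final List<Contact> contactsInWork = new ArrayList<Contact>();               // zero-to-many (processing contacts)

    public WatchstanderAgent(String watchstation, AgentAttributes attributes) {
        this.watchstation = watchstation;
        this.attributes = attributes;
    }
}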

f.        Object Relationships

                    Each contact has a one-to-one relation with the Contact Data Display.

                    Each contact has a one-to-one relation with the Tactical Display.

                    Each contact has a one-to-many relation with agents (processed by agents).

                    Each contact has a one-to-one relation with the Tactical Display Icons.

                    Each contact has a one-to-many relation with CIC equipment (processed by equipment).

                    Each piece of CIC equipment has a one-to-one relation with CIC Agent Display.

                    Each piece of CIC equipment has a one-to-one relation with a CIC Agent Display Equipment Icon.

                    Each piece of CIC equipment has a one-to-one relation with the Equipment Status Log.

                    Each Equipment Status Log has a one-to-one relation with an associated pop-up menu.

                    Scenario Log has a one-to-one relation with an associated scenario.

                    Each Simulation Scenario Object contains the following objects:

                    Shortcut Control Buttons Display

                    Tactical Display

                    Contact Data Display

                    CIC Agent Display

                    Agent Attributes Display

                    Menu Bar

                    CIC Agent Display: Equipment Icons (one for each watchstation)

                    CIC Agent Display: Agent Icons (one for each agent)

                    Tactical Display: Contact Icons (one for each contact object)

                    Agent Decision Log (one for each agent)

                    CIC Equipment Status Log (one for each piece of equipment)

                    Scenario Event Log (one for each Scenario executed)

                    Contacts (multiple numbers)

                    Agent Pop-up Menu (associated with a selected CIC agent)

                    Contact Pop-up Menu (associated with a selected contact)

                    CIC Equipment Pop-up Menu (associated with a specific piece of equipment)

g.        Actions on Agents and Objects

                    Agents: Change Attributes

                    Shortcut Control Button Display: Select/Deselect Start/Continue Sim., Pause Sim., Stop Sim., Increase Time Compression, Decrease Time Compression buttons

                    Tactical Display: Display, Move, & Delete Contact(s)

                    Agent Attributes Display: Display Agent Attribute Data

                    Agent Pop-up Menu: Display Agent Decision History Log, Change Agent Attributes

                    CIC Equipment Pop-up Menu: Change Setup, Display CIC Equipment Status Log

                    CIC Equipment: Change equipment readiness & setup (via CIC Equipment Pop-up Menu)

                    Contact Pop-up Menu: Display/Change Contact Attributes

                    Contacts: Change Attributes (via Contact Pop-up Menu)

                    Menu Bar: Open, Close, Create, Save, Start/Continue, Pause, Stop Scenarios; Increase/Decrease Scenario Time Compression

                    Agent Decision History Log: Append Decision Data, Delete Log

                    CIC Equipment Status Log: Append Status Data, Delete Log

                    Scenario Event Log: Append Scenario Event Data, Delete Log

3. Visual Design

During the Phase Three development of the ADC Simulation, preliminary pencil sketches of the simulation interface were designed. These sketches covered the initial designs for the main simulation interface window as well as all of the expected menus and pop-up input menus that would appear from the selection of menu items. "The idea here is to get something visible early. Sketches, of both screens and of task flows, are useful as a first step for getting quick feedback." [39] These designs were presented to personnel experienced with combat information center air-defense operations to collect feedback early in the simulation development process. Using pencil drawings implies to the reviewers that the interface design is still in the preliminary stages, and thus constructive comments are more easily obtained (preventing reluctance due to fear of insulting the developer). Six sketch designs were created; one is shown below, and the other five drawings are displayed in Appendix A. The results of the review of the initial design sketches are provided in the following section.

 

                                                                       Figure 9.                     Preliminary Conceptual Sketches of ADC Simulation GUI.

 

4. Early Analysis

Two experienced reviewers were selected to review the preliminary sketches and provide feedback regarding the effectiveness of the ADC Simulation interface design. Their comments are listed below.

 

 

a. Reviewer #1 Comments

(1) Recommendations.

                    Tactical Display: Highlight a contact when it has been selected with the mouse.

                    CIC Watchstander Display: Highlight an agent or watchstation/equipment when it has been selected with the mouse.

                    Since the Tactical Display is the centerpiece of the simulation, prevent pop-up windows from displaying on top of it.

                    Employ auditory cues to alert the user when unusual or anomalous events occur (e.g., a misidentified contact, the cruiser shooting a missile) to ensure the user's attention is focused on the associated situation.

                    Implement a sub-window at the bottom of the simulation screen that displays the three most important contacts of interest.

                    Use tool-tips to display the contact's track number and actual (as opposed to the CIC perceived) basic data such as altitude, speed, and course.

(2) Comments on Recommendations.

                    Will implement.

                    Will implement.

                    Will attempt to implement. Controlling the location of a pop-up window is not always possible.

                    Will implement.

                    Most likely will not implement in this form. The symbols used in the simulation distinguish among hostile, friendly, and unknown contacts as well as whether they are surface or air contacts. Additionally, the hostile, unknown, and friendly contact types will be color-coded red, yellow, and blue, respectively. If the recommended feature is implemented, it will probably be used to list the hostile contacts for ease of reference.

                    Will implement.

b. Reviewer #2 Comments

(1) Recommendations.

                    If appropriate, change the "Scenario Utilities" menu name to "File" since most of the menu's actions are similar to options found in most Microsoft "File" menus. This will enhance the understandability and learnability of the simulation.

                    Shortcut Control Button Display: Add a display to show the current time compression ratio.

                    When the simulation program is initially loaded, implement a default setting for all of the watchstander attributes, CIC equipment settings, scenario external attributes, and doctrine setup so the user can run the program immediately. (Currently, the design requires the user to manually configure all of these features before running the simulation, or an error will prompt the user to complete the task.)

                    Use the Java Help Set API to organize help information throughout the simulation.

(2) Comments on Recommendations.

                    Will take this under consideration. Although this is an appealing recommendation, some options under that menu do not lend themselves to the typical "File" menu actions.

                    Will implement with one possible modification. The "Time Compression Ratio" display may be placed underneath the simulation "time elapsed" display to give the user one place to look for time-related information.

                    Will implement with two modifications. First, upon attempting to run the simulation for the first time (with the default settings), the user will be asked whether he or she would like to re-configure the scenario settings. Second, the user will be able to use a setup wizard to configure the scenario to the desired settings.

                    Will implement. This recommendation originated from a discussion involving the implementation of a "Help/Amplification" button on the Watchstander Attributes, CIC Equipment Setup, Scenario External Attributes, and Doctrine Setup menu-item pop-up input windows to provide the user some amplifying information concerning the various setting options (i.e., Basic, Experienced, Expert). We intended to include this capability in the simulation, and Reviewer #2 recommended that this capability be further organized into a larger simulation "Help Feature" utilizing the Java Help Set API.

E. UCD PROCESS PHASE FOUR: ADC SIMULATION INTERFACE IMPLEMENTATION

During Phase Four, we used the design sketches to implement a working prototype of the ADC Simulation interface. The prototype was developed in the Java language and was a key initial component in the building of the entire simulation program.

 

                                                 Figure 10.                   Early Implementation of ADC Simulation GUI before Usability Analysis.

 

F. UCD PROCESS PHASE FIVE: USABILITY ANALYSIS OF ADC SIMULATION INTERFACE

1. Usability Analysis Introduction

The usability analysis phase is an essential (and often ignored) portion of a software or hardware system evaluation; as discussed earlier, neglecting it can lead to more profound problems later for the users of the system. A working, interactive prototype of the ADC Simulation interface was developed (see Figure 10) for the evaluation of the system for several reasons.

[First,] building the prototype forces critical thinking about details of the interface, bringing to the surface issues that are not obvious when looking at static screens. [Second,] live demos of the prototype are important for getting buy-in for your design... [Third,] the prototype can also, of course, provide valuable usability data to feed the iterative design process. Finally, the prototype itself . . . becomes part of your user interface requirements. [40]

Subject Matter Experts (SME) from the AEGIS Training and Readiness Center (ATRC) Detachment in San Diego, California, were selected to evaluate the ADC Simulation interface thoroughly.

Before the evaluation occurred, we generated a comprehensive list of common tasks that a potential user of the system would need to perform to test the usability of the interface, and these tasks were then evaluated by the team. This pre-test was conducted to gather preliminary performance data that was used to construct the task-list data-recording sheet. Two types of task attributes were analyzed: (1) initial performance of certain tasks and (2) the learnability of the system. The Initial Performance attribute tested the evaluators' ability to perform a task based on the intuitiveness of the interface, its familiarity relative to other previously experienced (and possibly similar) interfaces, and what generally "seems like the logical action to take" to complete the task. The Learnability attribute examined the level of ease or difficulty required to learn how to use the interface. This attribute was measured by prompting the evaluators to perform tasks either similar in some manner or identical to tasks performed earlier in the session. To capture the performance of the evaluators, two metrics were recorded for each task: (1) total time to complete the task and (2) number of errors made while performing the task. Prior to commencing a trial, each SME was informed that the objective of the evaluation was to test the overall usability of the ADC Simulation and that their performance (whether positive or negative) was indicative of the system's "user-friendliness" (or lack thereof), not a measure of their personal skill or intelligence. This statement was given primarily to set the evaluators at ease so they would provide the maximum amount of feedback concerning the interface.

2. Task List Overview

The following set of tasks was part of the evaluation of the simulation. A majority of the tasks require the user to set various attributes of the simulation program via either the menu bar options or the icons and buttons in the GUI. These tasks are representative of the majority of the tasks that will be performed by the user when running the fully operational simulation. Following their evaluation of the ADC Simulation, the SMEs were given a survey to rate the usability of the interface and participated in a post-testing feedback session to discuss the design of the interface with the developer. The results of the usability analysis are listed below and in Appendix B (detailed data and comparison charts/graphs).

TASK #    TASK NAME
1         Open Scenario Menu
2         Open Watchstander Attributes Menu
3         Open CIC Equipment Setup Menu
4         Open Scenario Doctrine Setup Menu
5         Open Scenario External Attributes Menu
6         Open Simulation Logs Menu
7         Change the Maximum time it takes a Watchstander to complete a Task
8         Select a contact to display data in the Contact Data Display Window
9         Select the F-TAO watchstander to display data in the Agent Attributes Window
10        Open a contact's Pop-up Options Window
11        Open the F-TAO Pop-up Options Window
12        Increase the Time Compression of the Simulation
13        Pause the Simulation
14        Pause the Simulation (Alternate method)
15        Set the Situation Assessment Skill Level to Expert for the Force TAO (F-TAO)
16        Set the Fatigue Level to Exhausted for the RSC
17        Set the SPY-1B Radar Equipment Readiness Level to Non-Operational
18        Set the ADC Doctrine Query Range to 30 NM & Warning Range to 20 NM
19        Set the Scenario Threat Level to Red
20        Open the Scenario Event Log
21        Open the SLQ-32 System Status Log
22        Set the Performance Probabilities Watchstander Fatigue levels to (0.5, 0.7, 0.9)
23        Change the Maximum time for the F-TAO Watchstander to complete a task
24        Change the speed of the Hostile Air contact to 500 KTS
25        Change the F-AAWC Experience Attribute to Expert
26        Change the Link Equipment Status to Partially Degraded

 

                                                                                                                                           Table 12.        List of Tasks

 

3. Subject Profile

The subjects for this study came from the AEGIS Training & Readiness Center (ATRC) Detachment, San Diego, CA. The evaluation of the AEGIS Cruiser Combat Information Center (CIC) Air-Defense Simulation was conducted on 12-13 September 2002 at the ATRC Detachment. The subjects' air-defense experience ranged from 10 to 20 years, and their ranks spanned E-7 (Chief Petty Officer) to O-3 (Lieutenant).

                    Subject #1: Chief Petty Officer/E-7 (Operations Specialist)

                    Subject #2: Senior Chief Petty Officer/E-8 (Operations Specialist)

                    Subject #3: Senior Chief Petty Officer/E-8 (Fire Control Technician)

                    Subject #4: Chief Petty Officer/E-7 (Operations Specialist)

                    Subject #5: Lieutenant/O-3 (Surface Warfare Officer/Prior Enlisted)

4. Data Collection

A set of twenty-six tasks was formulated as part of the evaluation of the CIC Air-Defense Simulation GUI. These tasks ensured the subjects interacted with all of the major aspects of the simulation so that a comprehensive set of data about user performance could be collected. Two performance metrics were recorded during the evaluation process: number of errors committed while performing the task and total time to successfully complete the task.

TASK #    USABILITY ATTRIBUTE     VALUE TO MEASURE
1         Initial Performance     # of Errors; Length of time to successfully complete task
2         Initial Performance     Length of time to successfully complete task
3         Initial Performance     Length of time to successfully complete task
4         Initial Performance     # of Errors
5         Initial Performance     Length of time to successfully complete task
6         Initial Performance     # of Errors
7         Initial Performance     # of Errors; Length of time to successfully complete task
8         Initial Performance     Length of time to successfully complete task
9         Initial Performance     # of Errors
10        Initial Performance     # of Errors
11        Initial Performance     Length of time to successfully complete task
12        Initial Performance     # of Errors; Length of time to successfully complete task
13        Initial Performance     # of Errors; Length of time to successfully complete task
14        Initial Performance     # of Errors; Length of time to successfully complete task
15        Learnability            Length of time to successfully complete task
16        Learnability            # of Errors
17        Learnability            # of Errors
18        Learnability            Length of time to successfully complete task
19        Learnability            Length of time to successfully complete task
20        Learnability            # of Errors
21        Learnability            Length of time to successfully complete task
22        Learnability            # of Errors; Length of time to successfully complete task
23        Learnability            # of Errors
24        Learnability            # of Errors; Length of time to successfully complete task
25        Learnability            # of Errors; Length of time to successfully complete task
26        Learnability            # of Errors; Length of time to successfully complete task

 

                                                                                                                   Table 13.        Usability Analysis Attributes.

5. Analysis of Task Data

For each task, either a primary and a secondary measurement value or two primary measurement values are provided. In the former case, the primary value includes the best case, worst case, and target level for the measurement along with the average value for that measurement. In the latter case, both values include the best cases, worst cases, and target levels for those measurements. Following each summary table are comment blocks for noteworthy errors and memorability/learnability issues that were encountered during the evaluations. The best case, worst case, and target levels for number of errors and times to complete tasks were determined during the initial development of the task list. Appendix B provides a summary breakdown of the key data collected from the five subjects' evaluations of each task they were requested to perform.

6. Analysis of Subject Evaluation Surveys

After each session, the subject was given a survey to record his evaluation of the usability of the simulation. Based on the results of the surveys, the subjects generally evaluated the simulation's interface favorably. The survey was divided into the following four categories:

                    Screen Layout

                    Overall Display Layout relative to menu bars and pop-up menus

                    Menu Location & Wording

                    Task Completion

a. Screen Layout

The average survey scores ranged from 3.8 to 4.4 on a scale of 5, which indicated the subjects generally felt the simulation's Screen Layout was between "Acceptable" and "Best Possible." The individual subjects' breakouts are displayed in Figure 18; out of a total of forty possible selections for this category (eight per subject for 5 subjects), thirteen (13) were rated with a score of 5.0, sixteen (16) with a score of 4.0, and eleven (11) with a score of 3.0. There were no areas rated below 3.0.

b. Overall Display Layout Relative to Menu Bars and Pop-Up Menus

The average survey scores ranged from 4.0 to 4.4 on a scale of 5, which indicated the subjects generally felt the simulation's Overall Display Layout was near "Best Possible." The individual subjects' breakouts are displayed in Figure 20; out of a total of forty possible selections for this category (eight per subject for 5 subjects), fifteen (15) were rated with a score of 5.0, fifteen (15) with a score of 4.0, and ten (10) with a score of 3.0. There were no areas rated below 3.0.

c. Menu Location and Wording

The average survey score was 3.8 on a scale of 5, which indicated the subjects generally felt the simulation's Menu Location & Wording was between "Acceptable" and "Best Possible," but closer to the "Acceptable" middle value. The individual subjects' breakouts are displayed in Figure 22; out of a total of fifteen possible selections for this category (three per subject for 5 subjects), six (6) were rated with a score of 5.0 and nine (9) with a score of 3.0. There were no areas rated below 3.0. The lower scores in this category are possibly due to the difficulty a couple of the subjects encountered when trying to perform tasks involving the selection of menus (regular & pop-up) that were not intuitive for them. Details of some of these difficulties are discussed in the "Analysis of Task Data" and "Overall Simulation Analysis" sections in Appendix B, and their remedies are provided in the "Recommendations" section below.

d. Ease of Performance of the Task Completion List

The average survey scores ranged from 3.6 to 4.2 on a scale of 5, which indicated the subjects generally felt the ease of performance of the simulation's Task Completion list was between "Acceptable" and "Best Possible." The individual subjects' breakouts are displayed in Figure 24; out of a total of twenty-five possible selections for this category (five per subject for 5 subjects), eight (8) were rated with a score of 5.0, seven (7) with a score of 4.0, and ten (10) with a score of 3.0. There were no areas rated below 3.0. Again, the lower scores in this category are possibly due to the difficulty a couple of the subjects encountered when trying to perform tasks involving the selection of menus (regular & pop-up) that were not intuitive for them.

 

 

 

7. Recommendations

After each evaluation session, the events of the session were reviewed with the participant, and recommendations to improve the usability of the simulation were solicited. The subjects provided the following recommendations:

a. Subject #1

                    Place information sub-window displays (i.e., Contact Data & Watchstander Attribute Displays) on one side of the screen and action/interface displays (i.e., CIC Agent & Shortcut Control Button Displays) on the other side.

                    Change the "Watchstander Tasks & Skills" menu to another name ("Task & Skill Modifiers" recommended) to prevent confusion with the "Watchstander Attributes" menu.

                    Upgrade the CIC Agent Display icons to have all of the "Watchstander Attribute" options from the menu bar in the icon's pop-up menu.

b. Subject #2

                    On the Shortcut Control Button Display and "File" menu, change the usage of the term "Simulations" to "Scenario" to promote increased familiarity. This term is more recognizable/understandable to the potential users of the system.

                    Upgrade the CIC Agent Display icons to have all of the "Watchstander Attribute" options from the menu bar in the icon's pop-up menu.

                    Implement a "Zoom In/Out" feature for the map in the Tactical Display.

c. Subject #3

                    Implement a "Zoom In/Out" feature for the map in the Tactical Display.

                    Increase the font size in the Simulation Interface.

                    Rename the "Start/Continue Simulation" button on the Shortcut Control Button Display and in the "File" menu to "Run/Continue" to prevent confusion with "Open Scenario."

                    Upgrade the CIC Agent Display icons to have all of the "Watchstander Attribute" options from the menu bar in the icon's pop-up menu.

d. Subject #4

                    Rename the "Watchstander Tasks & Skills" menu to another name ("Probabilities & Tasks" recommended) to prevent confusion with the "Watchstander Attributes" menu.

                    Implement an optional "Simulation Setup Wizard" feature to assist with the configuration of scenarios.

 

 

e. Subject #5

                    Rename the "Watchstander Tasks & Skills" menu to another name ("Probabilities & Tasks" recommended) to prevent confusion with the "Watchstander Attributes" menu.

                    Upgrade the CIC Agent Display icons to have all of the "Watchstander Attribute" options from the menu bar in the icon's pop-up menu.

G. UCD PROCESS PHASE SIX: INTERFACE MODIFICATION/REDESIGN

Phase Six of the UCD process involved modifying the Air-Defense Simulation interface to implement user-recommended design alterations. Eligible modifications were drawn from the quantitative data (charts and graphs) derived from the usability analysis as well as from the qualitative comments provided by the Subject Matter Experts. The figure below displays the updated program interface following the changes.

 

                                                                    Figure 11.                   Updated ADC Simulation GUI following Usability Analysis.

 

IV. DESCRIPTION OF THE ADC SIMULATION PROGRAM DESIGN AND STRUCTURE

A. PROGRAM LANGUAGE AND SYSTEM REQUIREMENTS FOR ADC SIMULATION

The ADC Simulation was written in the Java language (Java Development Kit Version 1.3.1) and was developed using the JBuilder 5 Application Development Environment. The simulation was designed to run on a system with the following requirements:

                    Pentium III processor or equivalent or higher.

                    Minimum 256 megabytes of RAM (512 megabytes preferred).

                    A system with Java Development Kit Version 1.3.1 or higher installed.

                    Screen display of 1600 x 1200 pixels.

The processing power and memory requirements are emphasized because the ADC Simulation is a multithreaded program, which places substantial demands on the computer system. Multithreading is a feature of the Java language that allows various components in a program (in this case the ADC Simulation) to employ time division multiple access, or timesharing, of a computing system's resources (a single processor and memory) to perform multiple tasks in simulated parallelism. This capability of the Java language was essential to the development of the ADC Simulation because the program attempts to emulate a human activity and process that occurs in parallel.
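
The following minimal sketch illustrates the multithreaded approach described above. The class and method names (WatchstanderAgent, performNextTask) are illustrative assumptions only and are not taken from the actual ADC Simulation source.

// Minimal sketch: each watchstander agent runs on its own thread so that
// agents appear to work in parallel, as the simulation requires.
public class WatchstanderAgent implements Runnable {

    private final String watchstation;   // e.g., "Force TAO", "RSC"
    private volatile boolean running = true;

    public WatchstanderAgent(String watchstation) {
        this.watchstation = watchstation;
    }

    public void run() {
        for (int tick = 0; tick < 5 && running; tick++) {
            performNextTask();               // evaluate a contact, send a report, etc.
            try {
                Thread.sleep(1000);          // one simulated "tick" per second
            } catch (InterruptedException e) {
                running = false;             // stop cleanly if the scenario ends
            }
        }
    }

    private void performNextTask() {
        System.out.println(watchstation + " is processing its highest-priority task.");
    }

    public static void main(String[] args) {
        // Launch two agents; the JVM timeshares the processor between them.
        new Thread(new WatchstanderAgent("Force TAO")).start();
        new Thread(new WatchstanderAgent("Radar Systems Controller")).start();
    }
}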

B. DISCUSSION ABOUT MULTI-AGENT SYSTEMS

The ADC Simulation watchstanders were implemented using multi-agent system (MAS) technology in which each watchstander was designed as an "agent." Within the context of this simulation, an agent is a component of software that:

                    Is capable of acting in an environment;

                    Can communicate directly with other agents;

                    Is driven by tendencies (in the form of individual objectives or of a satisfaction/survival function which it tries to optimize);

                    Possesses resources of its own;

                    Is capable of perceiving its environment (but to a limited extent);

                    Has only a partial representation of this environment;

                    Possesses skills and can offer services;

                    Has behavior that tends toward satisfying its objectives, taking account of the resources and skills available to it and depending on its perception, its representations and the communications it receives.[41]

The watchstander agents in the ADC Simulation contain intent and objectives (to perform their assigned duties), communicate among each other to achieve their objectives, and possess resources (skill, experience, fatigue, and decision-maker type attributes as well as combat systems equipment). They perceive their environment to a limited extent, since each watchstander agent receives this information via combat systems sensory equipment, through verbal communications (from other watchstander agents), or from CIC watchstation information display systems. The watchstander agents offer services to each other by disseminating information vital to their performance of air-defense duties and the operations of the CIC, and they exert influence within the environment (and on other agents) through their actions (e.g., the Force TAO's classification of an aircraft as Hostile).
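
The agent properties listed above can be summarized in a small interface sketch. This is an illustrative outline only; the interface and method names below (Agent, perceive, communicate, act) are assumptions for exposition and do not appear in the ADC Simulation source.

import java.util.List;

// Illustrative sketch of the agent properties listed above.
public interface Agent {

    // Perceive the environment, but only to a limited extent (partial representation).
    List perceive(Environment environment);

    // Communicate directly with another agent (e.g., pass a contact report).
    void communicate(Agent recipient, Object message);

    // Act in the environment, guided by objectives and the resources and skills available.
    void act(Environment environment);
}

// Placeholder for the space in which agents and objects are situated.
interface Environment {
}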

MAS technology is a blending of the cognitive/social sciences (psychology, ethology, sociology, philosophy), the natural sciences (ecology, biology), and the computer sciences, since multi-agent systems simultaneously model, explain, and simulate natural phenomena (in this case, human behavior in the ADC Simulation) and provide models for self-organization. [42] Traditional programming is often mechanistic, hierarchical, and modular and, consequently, does not lend itself well to simulating the often surprising (whether organized or chaotic) behavior of interactive human and environmental systems. MAS technology is less restrictive in its design, and this produces simulation behavior often more akin to that observed in the real world. The term "multi-agent system" is applied to a system comprising the following elements:

                    An environment, E, that is, a space which generally has a volume.

                    Objects, O. These objects are situated; that is to say, it is possible at any given moment to associate any object with a position in E. These objects are passive, that is, they can be perceived, created, destroyed and modified by the agents.

                    An assembly of agents, A, which are specified objects (A ⊆ O), representing the active entities of the system.

                    An assembly of relations, R, which link objects (and thus agents) to each other.

                    An assembly of operations, Op, making it possible for the agents of A to perceive, produce, consume, transform and manipulate objects from O.

                    Operators with the task of representing the application of these operations and the reaction of the world to this attempt at modification. [43]

In the ADC Simulation, the watchstander agents perform their duties within a layered environment (the Combat Information Center inside the AEGIS cruiser, which operates within the battle group's operational area) that contains a multitude of objects (aircraft contacts). The watchstander agents can execute operations to perceive the environment as well as the objects in it and to communicate with each other. Conversely, the objects within the ADC Simulation environment can also perform operations to perceive and interact with the AEGIS cruiser (and thus affect the watchstander agents) and the aircraft carrier. These operations are governed by relationships that determine the scope and degree to which the operations can occur.
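
A compact structural sketch of these MAS elements follows. It is illustrative only, assuming hypothetical class names (Simulation, addContact, addAgent, step) rather than the actual ADC Simulation classes.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the MAS elements described above: an environment (E) that
// holds passive objects (O, the aircraft contacts) and active agents (A, the watchstanders).
public class Simulation {

    private final List contacts = new ArrayList();   // objects, O
    private final List agents = new ArrayList();     // agents, A (themselves objects, A is a subset of O)

    public void addContact(Object contact) {
        contacts.add(contact);
    }

    public void addAgent(Object agent) {
        agents.add(agent);
        contacts.add(agent);                          // agents are also objects in the environment
    }

    // One operation, Op: every agent perceives the current set of objects.
    // Relationships (R) would limit what each agent is actually allowed to perceive.
    public void step() {
        for (int i = 0; i < agents.size(); i++) {
            System.out.println("Agent " + agents.get(i) + " observes " + contacts.size() + " objects.");
        }
    }
}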

The watchstander agents possess several attributes that enhance their simulated performance as human watchstanders. The agents have been imbued with a pervasive intent that drives them to attempt to accurately detect, evaluate, and classify as many aircraft contacts as possible and to take appropriate measures to protect the battle group. This intent to achieve their objective requires a substantial level of collaboration, cooperation, and communication.

1. Coordinated Collaboration

The simulated CIC watch team's performance of the air-defense duties can be categorized as coordinated collaboration among the watchstander agents. Coordinated collaboration assumes that the agents have compatible goals but individually possess insufficient skills and resources.

Complex collaboration supposes that the agents have to coordinate their actions to procure the synergic advantages of pooled skills... Coordinated collaboration is the most complex of cooperation situations, since it combines task allocation problems with aspects of coordination shaped by limited resources. [44]

The combination of compatible goals but insufficient skills and resources closely describes the environment of the CIC (both actual and simulated). Conducting the battle group air-defense duties and managing its battlespace requires an enormous amount of resources, and one agent (or person) would be unable to perform these requirements alone. There simply is not enough time to conduct these operations adequately alone, or enough capacity to absorb, process, and act on the information provided effectively and efficiently. The successful performance of the air-defense duties is primarily due to the collaborative combination of the different sets of skills, experiences, and other attributes of the watchstanders assigned to the various watchstations, which allows them to coordinate their information and activities to achieve their objectives together.

2. Anticipative-Reactive Agents

Another aspect of the watchstander agents that bears examination is the nature of their behavior in the performance of their duties, specifically relating to planning (or failure to plan) for future events. This concept is known as cognitive/reactive opposition and is defined as the

capacity or lack of capacity to anticipate future events and to prepare for them. Reactive agents, by the very fact that they have no representation of their environment or of other agents, are incapable of foreseeing what is going to happen, and thus of anticipating by planning what actions to take. Cognitive agents, on the other hand, by their capacity for reasoning based on representations of the world, are capable... of memorizing situations, analyzing them, foreseeing possible reactions to their actions, using these to decide on conduct during future events, and so planning their own behavior. [45]

With these two ends of the spectrum fixed, where do the watchstander agents in the ADC Simulation fall? The watchstander agents' performance model more closely resembles the description of reactive agents, but they also possess a moderate level of anticipative behavior. In a general sense, the watchstander agents do not conduct complex planning for future events, but on a less sophisticated level they do exhibit anticipative behaviors. At the individual watchstander agent level, many of the agents who must evaluate aircraft contacts as part of their duties use a prioritization selection process (described in Section M of this chapter) to determine which contact they will evaluate next. This selection process employs each contact's descriptive cues and attributes to predict its threat potential to the battle group. At the CIC team level, the entire air-defense process can be described as an organized, predictive attempt to use all of the team's resources to detect and track the aircraft contacts of greatest potential threat and to take appropriate actions to reduce, mitigate, or eliminate those threats before the team is rendered incapable of doing so (by attack).
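
A minimal sketch of such a cue-based prioritization is shown below. The scoring weights and the cue set (range, closing, speed, ES emission) are hypothetical illustrations; the actual selection process is described in Section M of this chapter.

// Illustrative sketch of a cue-based threat prioritization: score each contact
// from its descriptive cues and evaluate the highest-scoring contact next.
// The weights below are hypothetical and not taken from the ADC Simulation.
public class ThreatPrioritizer {

    public static double threatScore(double rangeNm, boolean closing,
                                     double speedKts, boolean hostileEmitter) {
        double score = 0.0;
        score += Math.max(0.0, 100.0 - rangeNm);   // closer contacts score higher
        score += closing ? 50.0 : 0.0;             // closing contacts score higher
        score += speedKts / 10.0;                  // faster contacts score higher
        score += hostileEmitter ? 75.0 : 0.0;      // hostile ES emissions score higher
        return score;
    }

    public static void main(String[] args) {
        double airliner = threatScore(80.0, false, 450.0, false);
        double fighter = threatScore(30.0, true, 600.0, true);
        System.out.println("Airliner score: " + airliner + ", fighter score: " + fighter);
        // The fighter scores higher, so it would be evaluated first.
    }
}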

3. Adaptation and Evolution

Adaptation and evolution are two important concepts in multi-agent systems that deal with learning and evolution, respectively. "We can see the problem of structural and behavioral adaptation of an assembly of agents in two different ways: either as an individual characteristic of the agents - and we then talk of learning - or as a collective process bringing reproductive mechanisms into play, which we call evolution." [46] Evolution is not incorporated into the ADC Simulation because the program deals with a complex problem that occurs within a very finite period of time. Although adaptation was also not explicitly implemented in the simulation, to a limited extent it occurs as a byproduct of the CIC team's attempt to classify the aircraft contacts and take appropriate actions in response to them. The act of classifying an aircraft shifts the perceptions of the watchstanders, resulting in a corresponding shift in their behavior when evaluating and acting on that contact in the future.

4. Cooperation within the Multi-Agent System

Cooperation among agents occurs when they engage in a common action after identifying and adopting a common goal, an essential element of social activity. [47] Cooperation is an essential attribute in an actual CIC performing air defense, and this concept was incorporated into the development of the ADC Simulation CIC team. However, it is difficult to quantify or qualify the occurrence of cooperation within a multi-agent system by examining the simulation's internal specifications. Drawing on discussions by E. H. Durfee and T. Bouron, Ferber defined the verification of cooperation

as a description of the activity of an assembly of agents by an external observer who need have no access to the mental states of the agents... For example, the behavior of ants is described as cooperative; this is because, when we observe them, we observe a certain number of phenomena which are used as indicators of cooperative activity. The idea of a cooperation indicator is of particular interest, for it allows us to get away from the internal characteristics of agents and consider their observable behavior. [48]

Consequently, the following indicators were postulated to describe cooperative activity:

                    The coordination of actions, which concerns the adjustment of the direction of agents' actions in time and space.

                    The degree of "parallelization," which depends on the distribution of tasks and their concurrent execution.

                    The sharing of resources, which concerns the use of resources and skills.

                    The robustness of the system, which concerns the system's aptitude for making up for any failure by an agent.

                    The non-redundancy of actions, which characterizes the low rate of redundant activities.

                    The non-persistence of conflicts, which testifies to the small number of blocking situations.

An observer can determine whether cooperative behavior is occurring by applying these indicators, of which the first four are relevant to the ADC Simulation. As for the first indicator, the coordination of actions among the cooperating watchstander agents results in an adjustment of their actions, which is typified when a watchstander makes an initial detection of an aircraft contact. Once the first report of the contact is transmitted, the other key sensory watchstanders focus their attention on the aircraft contact with the purpose of collecting all relevant information about it as quickly as possible and passing it to the Force TAO for a subsequent classification. This focused response by the watchstander agents reduces the overall time required to gather the contact's necessary data while providing the Force TAO with more information for an informed classification decision than would have been possible without cooperation.

The second indicator, parallelization of effort (based on distributed tasks and their concurrent execution), occurs within the simulated CIC team as a product of the watchstander agents' performance of their watchstation duties. Each agent has a different but vital role to perform in the air-defense process, and the agents execute their duties in parallel with each other to achieve the collective objective.

Sharing of resources, the third indicator, is also present in the simulation, as demonstrated by the watch team coordination discussed above. The watchstander agents can be considered, to a certain extent, resources and skills themselves that are available to the Force TAO in the performance of the air-defense duties. Whenever the Force TAO or another watchstander makes a request of or gives an order to another watchstander, the subsequent performance of that task by the specified agent is an example of the sharing of that watchstander's time and skills.

The fourth indicator, robustness, is a cornerstone of both actual and simulated CIC operations. Each of the sensory watchstanders provides a different aspect of the air-defense picture to help the Force TAO visualize the overall situation, and it is expected that some of the watchstander information provided may conflict. The purpose of having these varied sensory pictures (radar, IFF, Link 11/16, ES, queries/warnings, and visual identification) is to deliver enough information to the Force TAO that, if conflicts do exist, either the incorrect data will easily be discovered (and discarded) or the Force TAO will recognize the ambiguity of the situation and take appropriate actions to resolve it. In either case, the multiple data inputs from the CIC watch team are designed to overcome failures and mistakes by one or more watchstanders.

5. Connector-Based Multi-Agent Systems (CMAS)

Many of the interactions and communications among watchstander agents are coordinated by the use of interface components called connectors. As defined by John Hiles, connectors coordinate the activities of multiple agents.

In our software world connectors have the following operations. They can be extended, which means that their type information is known outside of the agent, or they can be retracted, in which case the type information is pulled back inside the agent. An extended connector is waiting for a complementary or matching connector. When two connectors match, the operation is called a connection. [49]

In the ADC Simulation, connectors are used for communication among the CIC watchstanders, for communication between the watchstanders and aircraft contacts, and to govern the actions of the CIC watchstanders on the aircraft contacts (e.g., query, engage with missiles). Each of the watchstander agents has connectors that can integrate with any of the other agents if an information exchange is desired. This capability is most frequently manifested in the passing of message reports concerning the attributes of aircraft contacts under evaluation. Sets of connectors also exist to allow the watchstander agents to interact with the aircraft contacts through both communications and actions. The implementation of connectors in the simulation produced a significant level of flexibility in the communications and interactive aspects of the program because it enabled the agents as well as the objects (aircraft) to establish and maintain a more realistic interface.
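
The extend/retract/match behavior described in the quotation above can be sketched as follows. The class and field names are hypothetical and intended only to illustrate the connector concept, not the actual CMAS implementation.

// Illustrative sketch of the connector concept: an extended connector advertises
// its type and waits for a complementary connector; a match forms a connection.
public class Connector {

    private final String type;        // e.g., "CONTACT_REPORT"
    private boolean extended = false; // retracted by default

    public Connector(String type) {
        this.type = type;
    }

    public void extend() {
        extended = true;              // type information is now visible outside the agent
    }

    public void retract() {
        extended = false;             // type information is pulled back inside the agent
    }

    // Two connectors connect only if both are extended and their types match
    // (a simple stand-in for complementarity).
    public boolean connectsTo(Connector other) {
        return this.extended && other.extended && this.type.equals(other.type);
    }

    public static void main(String[] args) {
        Connector rscReport = new Connector("CONTACT_REPORT");   // offered by a sensory watchstander
        Connector taoListener = new Connector("CONTACT_REPORT"); // sought by the Force TAO
        rscReport.extend();
        taoListener.extend();
        System.out.println("Connection formed: " + rscReport.connectsTo(taoListener));
    }
}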

C. OVERALL VISUAL DESIGN OF THE SIMULATION

1. Tactical Display

The Tactical Display, shown below in Figure 12, is the center of visual activity in the ADC Simulation program. On this screen the user is provided a view of the Arabian Gulf region to observe the interaction between the AEGIS cruiser and the aircraft contacts throughout the operational area. The Tactical Display shows the movement of the aircraft contacts across the battle group's airspace and uses color backgrounds to indicate the status of each aircraft with respect to the CIC team's air-defense process. In the ADC Simulation, aircraft contacts have only three actual classifications, Friendly, Neutral, and Hostile, and these classifications are used when the user looks at the actual identification information in the CIC Contact Display (discussed in the next section). Friendly aircraft consist only of U.S. aircraft, while Neutral aircraft are always commercial airliners. Iranian and Iraqi military aircraft are designated as Hostile regardless of their intentions (i.e., they are still hostile even when on patrol and not exhibiting attacking behavior).

                                                                                                           Figure 12.                   ADC Simulation Tactical Display.

 

Though these classifications provide clear boundaries for distinguishing types of aircraft, in the perceived environment of the CIC team five classification types are possible: Friendly, Neutral, Unknown, Suspect, and Hostile. These additional classifications used by the Navy acknowledge that a CIC team may never have complete information on an aircraft contact. The Suspect classification indicates that an aircraft's overall behavior is potentially hostile, but enough data does not exist to reach that conclusion. Classifying an aircraft as Suspect cues the team to give it greater attention since it could be more of a threat than other aircraft. The Unknown classification is the "catch-all" category for aircraft with inadequate information to make a relevant identification decision. In the ADC Simulation, all aircraft contacts are initially classified as Unknown. Figure 13 shows the classification icons used in the ADC Simulation.

 

                                                                                         Figure 13.                   ADC Simulation Aircraft Classification Icons.

 

If an aircraft's background is highlighted in green, the aircraft has not yet been detected and processed by the Radar Systems Controller watchstander. Conversely, if the background color is yellow, the Force TAO has classified the aircraft incorrectly (e.g., classified it as hostile when it was friendly). The Tactical Display is also interactive and allows the user to select contacts to view their data, modify aircraft contact attributes (discussed in a following section), and modify other scenario attributes.

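This background color logic can be sketched as a simple mapping from a contact's processing state to a display color. The method and class names below are hypothetical and are shown only to illustrate the display rule, not the ADC Simulation's actual rendering code.

import java.awt.Color;

// Illustrative sketch of the Tactical Display background rule described above.
public class ContactBackground {

    public static Color backgroundFor(boolean detectedByRsc,
                                      String actualClassification,
                                      String perceivedClassification) {
        if (!detectedByRsc) {
            return Color.GREEN;   // not yet detected/processed by the Radar Systems Controller
        }
        if (perceivedClassification != null
                && !perceivedClassification.equals(actualClassification)) {
            return Color.YELLOW;  // the Force TAO's classification does not match ground truth
        }
        return Color.WHITE;       // assumed default background otherwise
    }

    public static void main(String[] args) {
        System.out.println(backgroundFor(false, "HOSTILE", null));        // green
        System.out.println(backgroundFor(true, "FRIENDLY", "HOSTILE"));   // yellow
        System.out.println(backgroundFor(true, "HOSTILE", "HOSTILE"));    // white
    }
}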

2. Contact Data Display

The Contact Data Display (shown below in Figure 14) presents two sets of data concerning an aircraft contact selected by the user in the Tactical Display window. The first set of data, labeled "CIC Actual Data," is the ground-truth information containing the valid data about the aircraft. The second set of data, labeled "CIC Perceived Data," displays the data determined by the simulated CIC team based on their perceptions and performance.

 

                                                                                                                            Figure 14.                   Contact Data Display.

 

If the watchstanders have made errors during their assessment of the contacts, then information in this part of the display will differ from that in the CIC Actual Data portion.

3. Scenario Control Buttons

The Scenario Control Buttons display provides the user with shortcuts for tasks frequently used during the running of scenarios. These options include increasing the time compression, decreasing the time compression, starting/continuing a scenario, pausing a scenario, and stopping a scenario. The display is shown below in Figure 15.

                                                                                                           Figure 15.                   Scenario Control Buttons Display.

4. CIC Watchstander Display and Watchstander Attributes Display

The CIC Watchstander Display (Figure 16) is an interactive component that gives the user both a top-view look at the team and the option to modify various watchstander attributes. The interactive portion of the display consists of watchstander icons (circles) that can be selected with the mouse. If the user clicks on an icon, the corresponding watchstander's attributes (skills, experience, etc.) are displayed in the Watchstander Attributes Display (also shown below in Figure 16). In the CIC Watchstander Display, watchstander icons are color-coded. Icons in blue designate watchstanders who are primarily assigned to collect and assess sensory information about aircraft contacts for dissemination to the CIC team. Icons shown in red are primarily decision-makers who act on the information provided by the sensory watchstanders and give orders to the watchstanders shown in yellow, who are the primary personnel carrying out defensive and offensive actions.

In addition to these icons, each of the watchstanders has a Mental Activity Indicator (MAI) at the top of the watchstation location. The MAI displays the status of the task currently being performed by the watchstander and will flash red (high priority), yellow (medium priority), or green (low priority).

                                                           Figure 16.                   CIC Watchstander Display and Watchstander Attributes Display.

 

The MAI provides an indication of the mental load and stress the watchstander is experiencing and will correspond to the level of overall activity in a given scenario.

D. ADC SIMULATION PROGRAM: MENU OPTIONS

The Main Menu Bar of the ADC Simulation is shown below in Figure 17.

                                                                                                           Figure 17.                   ADC Simulation Main Menu Bar.

 

The following is a listing of the available options from the Main Menu Bar.

1. File Menu Options

                    Open Scenario (not implemented)

                    Close Scenario (not implemented)

                    Close & Save Scenario (not implemented)

                    Set Scenario Time Length

                    Run/Continue Scenario

                    Pause Scenario

                    Stop Scenario

                    Increase Time Compression

                    Decrease Time Compression

                    Scenario Sound ON/OFF Option

                    Run Scenario Setup Wizard

                    Exit Program

2. Watchstander Attributes Menu

                    Set Skill Levels

                    Basic

                    Experienced

                    Expert

                    Set Experience Levels

                    Newly Qualified

                    Experienced

                    Expert

                    Set Fatigue Levels

                    Fully Rested

                    Tired

                    Exhausted

                    Set Decision-Maker Type

                    Cautious

                    Balanced

                    Aggressive

3. CIC Equipment Setup Menu

                    Set Equipment Readiness Levels

                    Fully Operational

                    Partially Degraded

                    Highly Degraded

                    Non-Operational

 

 

                    Set Scenario Equipment Failure Option

                    Enabled

                    Disabled

                    Input Equipment Settings (not implemented)

4. Scenario External Attributes Menu

                    Set Weather Conditions

                    Clear Weather

                    Heavy Rain

                    Heavy Clutter

                    Set Contact Density

                    Low

                    Medium

                    High

                    Set Scenario Threat Level

                    White

                    Yellow

                    Red

                    Set Hostile Contact Level

                    Low

                    Medium

                    High

5. Doctrine Setup Menu

                    Set Air Defense Commander Doctrine (not implemented)

                    Set AEGIS Doctrine

6. Simulation Logs Menu

                    Open Scenario Event Log

                    Open Watchstander Decision History Log

                    Open CIC Equipment Status Log

                    Open Watchstander Performance Log

                    Analyze/Parse Scenario Logs

 

 

7. Task Times and Probabilities Menu

                    Modify Attribute Probabilities

                    Set Maximum Task Times

8. Time Factor Ratio and Simulation Time Windows

These windows inform the user of the current time in the simulation and the ratio of simulation time to actual time. Scenarios always start at 1200 hours and use military time references. The highest level of time compression allowed in the simulation is sixty-four seconds of simulation elapsed time for every second of elapsed actual time (64:1).
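
As a simple illustration of the time factor ratio, the sketch below advances a simulation clock from real elapsed time under a selectable compression factor of up to 64:1. The class and method names are hypothetical and not drawn from the ADC Simulation source.

// Illustrative sketch: convert real elapsed seconds into simulation time under
// a time compression factor capped at 64:1, with scenarios starting at 1200 hours.
public class SimulationClock {

    private final int compression;   // simulation seconds per real second (1 to 64)

    public SimulationClock(int compression) {
        this.compression = Math.max(1, Math.min(64, compression));
    }

    // Returns the simulation time (HHMM, military style) after the given number of real seconds.
    public String timeAfter(long realSeconds) {
        long simSeconds = realSeconds * compression;
        long startSeconds = 12L * 3600L;                 // scenarios start at 1200
        long total = (startSeconds + simSeconds) % 86400L;
        long hours = total / 3600L;
        long minutes = (total % 3600L) / 60L;
        return (hours < 10 ? "0" : "") + hours + (minutes < 10 ? "0" : "") + minutes;
    }

    public static void main(String[] args) {
        SimulationClock clock = new SimulationClock(64);
        // After 60 real seconds at 64:1, the simulation has advanced 64 minutes: prints 1304.
        System.out.println(clock.timeAfter(60));
    }
}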

E. DESIGN/STRUCTURE OF AIRCRAFT CONTACTS

1. Overview

The aircraft contact is the fundamental object in the simulation and, consequently, has the greatest level of detail (aside from the agents), complexity, and behavior. As in real-world air defense, the entire simulation (specifically the watchstander agents) is focused on the detection, processing, and classification of these contacts by the CIC team and on producing the resultant log and performance data. As Figure 18 shows, an aircraft contact object is divided into two components, the Actual Contact Data module and the CIC Team Perceived/Determined Contact Data module. The Actual Contact Data module contains the correct attributes of the contact and is available only to the user via the GUI display (Contact Data Display). The CIC Perceived/Determined Contact Data module contains the perceived characteristics/data concerning the specified contact and is based on the watchstander agents' analysis of the contact, which is affected by experience levels, skill levels, fatigue levels, etc. Data within this module is available to both the CIC watch team and the user via the GUI display.

 

                                                                                                      Figure 18.                   Generalized Aircraft Contact Object.

 

An aircraft contact has the following perceived and actual attributes:

                    Track Number (e.g. 50001)

                    Course (0-359 degrees true)

                    Speed (0-1800 nautical miles per hour)

                    Altitude (0-60000 feet)

                    Electronic Signal (ES) Emissions (based on radar type)

                    IFF Mode 1

                    IFF Mode 2

                    IFF Mode 3

                    IFF Mode 4

                    IFF Mode C

                    Point of Origin

                    Classification (Hostile, Suspect, Unknown, Neutral, Friend)

                    Radar Cross Section (Large Aircraft, Fighter Aircraft, Missile)

In addition, the aircraft has the following administrative attributes that are used by the watchstander agents and the simulation to report the status of the contact:

 

 

                    Whether contact is currently detected by radar

                    Whether contact has been evaluated by a watchstander (one for each watchstander)

                    The last time a watchstander evaluated the contact (one for each watchstander)

                    Whether the contact is closing on the AEGIS cruiser
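
A compact sketch of the two-module contact structure described above follows. The class and field names are illustrative assumptions; the actual ADC Simulation classes carry the full attribute set listed above (IFF modes, point of origin, radar cross section, and the administrative flags).

// Illustrative sketch of an aircraft contact with separate ground-truth and
// CIC-perceived data modules, as described above. Field names are hypothetical.
public class AircraftContact {

    // A small subset of the perceived/actual attributes listed above.
    public static class ContactData {
        int trackNumber;       // e.g., 50001
        int courseDegrees;     // 0-359 degrees true
        int speedKnots;        // 0-1800 knots
        int altitudeFeet;      // 0-60000 feet
        String classification; // Hostile, Suspect, Unknown, Neutral, Friend
    }

    private final ContactData actual = new ContactData();      // ground truth (user only)
    private final ContactData perceived = new ContactData();   // CIC team's assessment

    // Administrative attributes used to report the contact's status.
    private boolean detectedByRadar = false;
    private boolean closingOnCruiser = false;

    public ContactData getActual() {
        return actual;
    }

    public ContactData getPerceived() {
        return perceived;
    }

    public void setDetectedByRadar(boolean detected) {
        this.detectedByRadar = detected;
    }

    public boolean isDetectedByRadar() {
        return detectedByRadar;
    }

    public boolean isClosingOnCruiser() {
        return closingOnCruiser;
    }
}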

2. Aircraft Behaviors

a. Neutral Aircraft

In the simulation, neutral aircraft (commercial airliners) have a small set of behaviors that determine their flight profiles. These aircraft typically fly profiles that take them from a starting location to a destination point. The aircraft will occasionally change course to adjust their flight path, and once within a specified distance of their destination airport, they will commence a landing/descent profile. If a neutral aircraft's descent is in the vicinity of (and approaching) the AEGIS cruiser, this could become a significant concern for the CIC team. The neutral-aircraft behaviors are as follows:

                    Fly from intra-theater airport to intra-theater airport.

                    Fly from off-screen location to intra-theater airport.

                    Fly from intra-theater airport to off-screen location.

                    Do Landing/Descent flight profile for approach to destination airport.

                    Do Retreat/Alter course flight profile (caused by warning from cruiser).

                    Respond to query from AEGIS cruiser.

                    Respond to warning from AEGIS cruiser.

                    Experience in-flight casualty to aircraft radar (loss of radar/ES signal).

                    Experience in-flight casualty to aircraft IFF system (loss of IFF signal).

b. Hostile Aircraft

The hostile aircraft in the ADC Simulation possess the most robust and dynamic behavior of all aircraft contacts. These aircraft start their flights either at airports in Iran or Iraq or at random locations within one of those countries. They have the potential for varied flight profiles and can have multiple waypoints assigned to their missions. The combination of dynamic behavior and multiple waypoints creates a variety of unique situations that can potentially stress the CIC watch team. The Scenario Threat Level also influences this behavior: the higher the level, the more likely a hostile contact will attack the battle group. The hostile contacts consist of either fighter or patrol aircraft and have the following behaviors:

                    Fly from intra-theater airport to multiple waypoints within the theater and back to the home airport.

                    Fly from intra-theater airport to off-screen location.

                    Fly from intra-theater location (other than an airport) to multiple waypoints within the theater and back to an airport.

                    Fly from intra-theater location (other than an airport) to multiple waypoints within the theater and back to original intra-theater location.

                    Fly reconnaissance in the vicinity of the AEGIS cruiser and/or aircraft carrier (multiple waypoints that circle at varied ranges).

                    Do High/Low Speed and High/Low Altitude approach to AEGIS cruiser and/or aircraft carrier without attacking (followed by a retreat).

                    Do High/Low Speed and High/Low Altitude approach to AEGIS cruiser and/or aircraft carrier that finishes with an ASM attack.

                    Do Landing/Descent flight profile for approach to destination airport.

                    Do Retreat/Alter course flight profile (caused by warning from cruiser).

                    Respond to query from AEGIS cruiser.

                    Respond to warning from AEGIS cruiser.

                    Experience in-flight casualty to aircraft radar (loss of radar/ES signal).

                    Experience in-flight casualty to aircraft IFF system (loss of IFF signal).

c. Friendly Aircraft

The friendly contacts in the ADC Simulation are either fighter or support (E-2C Hawkeye) aircraft directly controlled by the CIC team. They assist the ADC and have the following behaviors:

                    Alert launch from the aircraft carrier.

                    Conduct visual intercept and identification of an aircraft contact.

                    Conduct intercept and engagement of an aircraft contact.

                    Fly/Return to assigned patrol location.

                    Return to base (aircraft carrier).

 

 

3.      Aircraft Contact Generation Module

Within the ADC Simulation, neutral, hostile, and friendly aircraft contacts are created by the Aircraft Contact Generation Module (ACGM), which then injects them into the running scenario. For hostile and neutral aircraft, the module generates contacts based on the Contact Density and Hostile Contact Level attributes. Friendly aircraft are created by the ACGM only when specifically ordered by the Force TAO or Force AAWC agents (which have directed the carrier's alert aircraft to launch). Surface-to-Air Missile (SAM) and Anti-Ship Missile (ASM) contacts are created only when either the AEGIS cruiser (via the watch team) or a hostile aircraft orders a missile launch. The ACGM is executed once every second and creates a new aircraft with the following probabilities, depending on the Contact Density option:

                    Contact Density Low - 0.0067 (1 out of 150 chance)

                    Contact Density Medium - 0.01 (1 out of 100 chance)

                    Contact Density High - 0.02 (1 out of 50 chance)

Given these probabilities, for every new contact generated, the probability that it is hostile is determined by the Hostile Contact Level option (a minimal sketch of this two-stage draw follows the list):

                    Hostile Contact Level Low - 0.05 (1 out of 20 chance)

                    Hostile Contact Level Medium - 0.10 (1 out of 10 chance)

                    Hostile Contact Level High - 0.25 (1 out of 4 chance)
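
The two option tables above reduce to a simple two-stage random draw performed once per simulated second. The following Java fragment is a minimal sketch of that logic; the class and method names (ContactGenerationSketch, shouldCreateContact, isHostile) are illustrative and are not taken from the thesis code.

```java
import java.util.Random;

/** Minimal sketch of the ACGM per-second draw; names are illustrative only. */
public class ContactGenerationSketch {
    private static final Random RNG = new Random();

    /** Called once per simulated second: does a new aircraft appear? */
    static boolean shouldCreateContact(String contactDensity) {
        double p;
        switch (contactDensity) {
            case "LOW":    p = 1.0 / 150.0; break;  // ~0.0067
            case "MEDIUM": p = 1.0 / 100.0; break;  // 0.01
            default:       p = 1.0 / 50.0;  break;  // "HIGH", 0.02
        }
        return RNG.nextDouble() < p;
    }

    /** Given that a contact is created, is it hostile rather than neutral? */
    static boolean isHostile(String hostileContactLevel) {
        double p;
        switch (hostileContactLevel) {
            case "LOW":    p = 0.05; break;  // 1 in 20
            case "MEDIUM": p = 0.10; break;  // 1 in 10
            default:       p = 0.25; break;  // "HIGH", 1 in 4
        }
        return RNG.nextDouble() < p;
    }
}
```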

F.      RELEVANT SIMULATION POP-UP WINDOWS

1.      Modify Contact Attributes Window (Figure 19)

The Modify Contact Attributes Window allows the user to interact directly with aircraft contacts and change most of their attributes. Modifiable attributes include speed, altitude, classification (actual), IFF modes, and the aircraft's radar electronic signature. These options give the user increased flexibility in the conduct of scenarios.

 

                                                                                            Figure 19.                   Modify Contact Attributes Popup Window.

 

2.      Scenario Setup Wizard Selection Window (Figure 20)

The Scenario Setup Wizard Selection Window offers the user several options for configuring a scenario quickly. The user can configure just the watchstanders, the CIC equipment, or the external scenario attributes, or can configure all of these options.

 

                                                                                  Figure 20.                   Scenario Setup Wizard Selection Popup Window.

 

 

 

3.      Select Specific Contact Window (Figure 21)

This popup window gives the user the opportunity to specifically select an aircraft contact for observation by inputting its track number. If a valid track number is entered, the aircraft's information appears in the Contact Data Display window.

 

                                                                                                Figure 21.                   Select Specific Contact Popup Window.

 

4.      Scenario Run Time Input Window (Figure 22)

The Scenario Run Time Input window allows the user to set the total time in hours the simulation will run. The default is five hours of simulated run time.

 

                                                                                              Figure 22.                   Scenario Run Time Input Popup Window.

 

G.      design/structure of watchstander agents

1.      Watchstander Attributes

a.      Skills

In the ADC Simulation, skill attributes specify the capability of a watchstander to perform a specified task related to that skill. For example, the Radar Systems Controller's Radar Operations and Track Evaluation skill would be employed every time the RSC evaluated a potential aircraft radar detection. To determine the success of a given task, each skill for a watchstander has an assigned probability of success based on the skill level assigned to that watchstander. There are three levels for skill attributes, Basic, Experienced, and Expert, for which the associated probability-of-success values are 0.70, 0.80, and 0.90. Little information was found to provide guidance for determining skill levels or the probability-of-success values associated with them. Consequently, during the interviews with the ATRC Detachment San Diego air-defense experts, this topic was discussed in great detail.[50] From these interviews, a consensus formed that three skill levels would be appropriate. The Basic level was considered to be a watchstander who had between zero and six months of time performing the given skill. A watchstander assigned the Experienced level had six months to a year of time performing the skill, including either an entire ship inter-deployment training cycle and/or half of a deployment. An Expert watchstander had performed the skill for over a year, including an entire deployment. Assignment of the skill level is left to the user of the ADC Simulation. Since the probability-of-success values are highly subjective, a way is provided to modify them if further studies are conducted or the user simply disagrees with them. The criteria used to determine a watchstander's skill level were not considered absolute, because all of the air-defense experts could recount past experiences in which a watchstander who had been performing certain tasks (as part of their watchstation) for an extended period of time was still operating at a skill level not commensurate with that length of time.

Additionally, each skill has an associated maximum task performance time. For many of the skills, these values had to be determined with the same interview methods discussed above.

b.      Experience

During the interviews with the air-defense experts, a question of high interest to us was whether there was a difference between skill and experience in this domain.[51] The overwhelming response was an affirmation that a difference existed and that experience influenced the performance of a skill. As stated above, skill level determines the probability of success in a specified task. Experience was considered an attribute strongly linked to the amount of time the watchstander had been qualified to perform in a given watchstation. The experience levels were categorized as Newly Qualified, Experienced, and Expert. It was noted that the experience level often influenced the time a watchstander needed to complete a task as well as the quality and accuracy of the performance. Consequently, we postulated that the experience attribute would have two effects on watchstander performance. First, the higher the level of experience, the less time the watchstander needs to perform a task: for the Experienced and Expert levels, a ten percent and twenty percent reduction in time, respectively. Second, for watchstanders that evaluate aircraft contacts as part of their duties (Force TAO, Force AAWC, Ship TAO, Ship AAWC, RSC, EWCO, TIC, IDS, and RC), there is an Evaluation Confidence attribute for each contact (for every watchstander) that is updated every time the watchstander assesses a contact. Each time the watchstander selects a contact for evaluation, a conclusion (based on probability) is made as to whether the watchstander's confidence in that aircraft's assessment is high enough to maintain the original evaluation. The more evaluations conducted on a contact by the watchstander, the more confident the agent becomes in the original assessment (and the less likely it is to change it). For all watchstanders, the baseline Evaluation Confidence value begins at thirty, but it is incremented at a different rate for the Newly Qualified, Experienced, and Expert levels (two, four, and six points, respectively). Additionally, there is a maximum confidence threshold that is based on the watchstander's experience level and is assigned ninety-five (95), ninety (90), and eighty-five (85) for Newly Qualified, Experienced, and Expert, respectively. This means that the higher the experience level, the more confident the watchstander will become in its assessments of the aircraft contacts.

Since applicable experimental data on this facet of watchstander performance could not be found, values were selected after consultations with the air-defense experts. To account for this subjectivity, the user can modify the experience-level attributes in the Task Times and Probability menu (discussed in Section L).
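
As a concrete illustration of the Evaluation Confidence mechanism described above, the Java sketch below applies the baseline value of thirty, the per-evaluation increments of two, four, and six points, and the experience-dependent caps of 95, 90, and 85. The class and field names are hypothetical and are not the thesis implementation.

```java
/** Sketch of the per-contact Evaluation Confidence update; names are hypothetical. */
public class EvaluationConfidenceSketch {
    enum Experience { NEWLY_QUALIFIED, EXPERIENCED, EXPERT }

    private static final int BASELINE = 30;

    private int confidence = BASELINE;
    private final Experience experience;

    EvaluationConfidenceSketch(Experience experience) {
        this.experience = experience;
    }

    /** Called each time the watchstander re-evaluates this contact. */
    void recordEvaluation() {
        int increment;
        int cap;
        switch (experience) {
            case NEWLY_QUALIFIED: increment = 2; cap = 95; break;
            case EXPERIENCED:     increment = 4; cap = 90; break;
            default:              increment = 6; cap = 85; break;  // EXPERT
        }
        confidence = Math.min(cap, confidence + increment);
    }

    int getConfidence() { return confidence; }
}
```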

c.      Fatigue

The fatigue attribute in the simulation allows the user to set a static level of readiness for each watchstander.

Although well-trained and physically fit naval personnel have a tremendous reserve capacity and can function under high stress workloads for surprisingly long periods of time, sustained conditions like that found during long periods of general quarters (GQ) can lead to fatigue and sleep deprivation, the cost being degraded performance. The negative effects of sustained readiness during Condition I [General Quarters] or II [relaxed GQ/Battle stations] are cumulative, and involve degradation of critical thinking, reaction time, accuracy, memory, coordination, communication and crew mission integrity.[52]

Based on the interviews with the air-defense experts and a review of this report, a simple qualitative model was implemented to incorporate declining performance related to fatigue, with three attribute levels: Fully Rested, Tired, and Exhausted. A Fully Rested watchstander was considered to have received a minimum of five hours of rest without having performed any kind of heavy physical labor (underway replenishment duties or a working detail moving supplies) or stood any watch between the rest period and commencing the watch duties. A Tired watchstander had received a minimum of three hours of sleep or had been performing watchstation duties for at least six hours without interruption (for rest) in a fairly demanding environment; a watchstander who had performed heavy physical labor before assuming watchstation duties also qualifies as Tired. An Exhausted watchstander was considered to have received less than three hours of sleep, performed heavy physical labor before assuming watchstation duties, or performed these duties in excess of six hours in a demanding environment. In the simulation, the fatigue attribute was designed to affect the skill attributes of watchstanders by penalizing both the probability of a watchstander completing a task and the length of time to complete the task (greater probability of a longer time).[53] No skill performance penalty applies to a Fully Rested watchstander, but Tired and Exhausted watchstanders are penalized -0.10 and -0.20, respectively, on the agent's probability of successfully performing a task.
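
Taken together, the skill and fatigue attributes reduce to a single success roll per task. The Java sketch below combines the skill-level base probabilities (0.70, 0.80, 0.90) with the fatigue penalties (0.0, 0.10, 0.20) described above; the names and structure are illustrative, and the actual program allows these values to be modified by the user.

```java
import java.util.Random;

/** Sketch of a skill/fatigue task-success roll; names and structure are illustrative. */
public class TaskSuccessSketch {
    enum Skill { BASIC, EXPERIENCED, EXPERT }
    enum Fatigue { FULLY_RESTED, TIRED, EXHAUSTED }

    private static final Random RNG = new Random();

    static boolean attemptTask(Skill skill, Fatigue fatigue) {
        double p;
        switch (skill) {
            case BASIC:       p = 0.70; break;
            case EXPERIENCED: p = 0.80; break;
            default:          p = 0.90; break;   // EXPERT
        }
        switch (fatigue) {
            case TIRED:     p -= 0.10; break;
            case EXHAUSTED: p -= 0.20; break;
            default:        break;               // FULLY_RESTED: no penalty
        }
        return RNG.nextDouble() < p;             // true = task performed successfully
    }
}
```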

d.      Decision-Maker Types

During the initial phase of the formal literature review and air-defense expert interviews, there was no plan to incorporate a decision-maker type attribute, but the concept was proposed while discussing CIC watchstander attributes. Many of the experts believed strongly in the need to include such an attribute because all of them felt that the differences among the primary decision-makers in the CIC (Force TAO, Force AAWC, Ship TAO, Ship AAWC) moderately influenced the overall aircraft classification process. The three types of decision-makers proposed from the discussions were Cautious, Balanced, and Aggressive, and it was determined that the attribute would relate to the length of time the watchstander takes before making a classification decision about an aircraft contact. In the simulation, the maximum time values assigned to the Cautious, Balanced, and Aggressive options are thirty, twenty, and fifteen seconds, respectively. The time assigned for a specific evaluation varies randomly between zero and the maximum time value. Since applicable experimental data on this facet of watchstander performance could not be found, values were selected after consultations with the air-defense experts.
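
The decision-maker type therefore only bounds a random classification delay. The following is a minimal sketch, assuming the uniform draw described above; the class and method names are illustrative.

```java
import java.util.Random;

/** Sketch of the decision-maker classification delay; names are illustrative. */
public class DecisionDelaySketch {
    enum DecisionMakerType { CAUTIOUS, BALANCED, AGGRESSIVE }

    private static final Random RNG = new Random();

    /** Returns the delay, in seconds, before this evaluation's classification decision. */
    static double classificationDelaySeconds(DecisionMakerType type) {
        double maxSeconds;
        switch (type) {
            case CAUTIOUS: maxSeconds = 30.0; break;
            case BALANCED: maxSeconds = 20.0; break;
            default:       maxSeconds = 15.0; break;   // AGGRESSIVE
        }
        return RNG.nextDouble() * maxSeconds;          // uniform between 0 and the maximum
    }
}
```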

2.      Watchstander Communication

The essence of the operation and performance of a watchstander is the receipt, processing, and transmission of messages during the simulation. A watchstander receives many requests and orders and also generates requests for other watchstanders. All of these are translated into messages within the simulation, and the watchstander agent processes these messages according to priority. Whenever a watchstander transmits a message, there is a check against the watchstander's Communication Skill attribute to determine whether the message was passed to the other watchstanders successfully (with an associated probability). Additionally, the experience level and maximum communication task time determine a delay for the transmission of the message to the other watchstanders.

The watchstander agent message handling and task execution process was implemented as shown in Figure 23. The structure includes an input-message reception queue, a message-priority processor, priority queues (low, medium, high), an action processor, and an output message transmission queue. Watchstander agents place order/request messages into another watchstander's input-message queue, where they are processed for execution. A code-level sketch of this structure follows the queue descriptions below.

                                                                       Figure 23.                   Message Handling Structure for all Watchstander Agents.

 

a.      Input/Receive Message Queue

The Input/Receive Queue receives messages from other watchstander agents and combat systems equipment. The messages in this queue are scanned, sorted, and forwarded to another set of queues based on the priority of the message (high, medium, low) by the Message Priority Processor.

b.      Watchstander Message Priority Processor

The Message Priority Processor pulls messages from the Input/Receive queue, checks their priority, and places them into the appropriate priority queue.

c.      High/Medium/Low Priority Message Queue

These are the priority queues that contain the sorted messages for action/processing by the watchstander agent. The watchstander agent will process these messages according to the priority level. Once a message has been processed and the required action conducted, a report message may be transmitted to other watchstander agents.

d.      Watchstander Action Processor

The action processor pulls an order message (based on priority) from one of the queues and performs the task indicated. The watchstander agent's skill and experience attributes are applied to the order and the task is performed (or attempted, since the watchstander could fail to perform it). If the task is successfully completed, the Watchstander Action Processor generates a report message and places it in the Output/Transmit Message queue.

 

e.      Output/Transmit Message Queue

The Output/Transmit Message Queue contains the outgoing messages for transmission to the other watchstander agents.
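
The queue structure described in subsections a through e maps naturally onto a small set of Java collections. The sketch below is a simplified rendering of that flow, not the thesis code; the Message class, priorities, and method names are assumptions made for illustration.

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** Simplified sketch of the watchstander message-handling structure; names are illustrative. */
public class WatchstanderMessagingSketch {
    enum Priority { HIGH, MEDIUM, LOW }

    static class Message {
        final String action;
        final Priority priority;
        Message(String action, Priority priority) { this.action = action; this.priority = priority; }
    }

    private final Queue<Message> inputQueue  = new ArrayDeque<>();  // a. Input/Receive queue
    private final Queue<Message> highQueue   = new ArrayDeque<>();  // c. priority queues
    private final Queue<Message> mediumQueue = new ArrayDeque<>();
    private final Queue<Message> lowQueue    = new ArrayDeque<>();
    private final Queue<Message> outputQueue = new ArrayDeque<>();  // e. Output/Transmit queue

    /** Other agents place their orders/requests here. */
    void receive(Message m) { inputQueue.add(m); }

    /** b. Message Priority Processor: sort incoming messages into the priority queues. */
    void sortIncoming() {
        Message m;
        while ((m = inputQueue.poll()) != null) {
            switch (m.priority) {
                case HIGH:   highQueue.add(m);   break;
                case MEDIUM: mediumQueue.add(m); break;
                default:     lowQueue.add(m);    break;
            }
        }
    }

    /** d. Action Processor: execute the highest-priority waiting order. */
    void processNext() {
        Message m = highQueue.poll();
        if (m == null) m = mediumQueue.poll();
        if (m == null) m = lowQueue.poll();
        if (m == null) return;                 // nothing to do this cycle
        boolean success = performTask(m);      // skill, experience, and fatigue checks happen here
        if (success) {
            outputQueue.add(new Message("REPORT: " + m.action, m.priority));
        }
    }

    private boolean performTask(Message m) {
        return true;  // placeholder: the real agent applies its attribute checks and task timing
    }
}
```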

 

#    Name/Action                                                      From        To     Priority
1    Report ES Contact                                                EWCO        ALL    ANY
2    Request ES Contact                                               ANY         EWCO   ANY
3    Report SLQ-32 System Casualty                                    EWCO        ALL    HIGH
4    Report Radar Contact                                             RSC         ALL    ANY
5    Report SPY-1B Radar Casualty                                     RSC         ALL    HIGH
6    Report IFF Contact                                               IDS/RC      ALL    ANY
7    Report IFF System Casualty                                       IDS/RC      ALL    HIGH
8    Report Communications System Casualty                            IDS/RC      ALL    HIGH
9    Report VLS Casualty                                              MSS         ALL    HIGH
10   Report CIWS Casualty                                             MSS         ALL    HIGH
11   Report Missile Engagement Kill Status                            MSS/RSC     ALL    HIGH
12   Report Link 11/16 System Casualty                                TIC/CSC     ALL    HIGH
13   Report Link 11/16 Contact                                        TIC         ALL    ANY
14   Order IDS to perform Query of an aircraft                        FTAO/STAO   ALL    HIGH
15   Order IDS to perform Warning of an aircraft                      FTAO/STAO   ALL    HIGH
16   Order Missile Engagement of an aircraft                          FTAO/STAO   ALL    HIGH
17   Order Aircraft Engagement of an aircraft                         FTAO/STAO   ALL    HIGH
18   Order Aircraft Intercept/Visual Identification of an aircraft    FTAO/STAO   ALL    HIGH
19   Report FTAO Classification of an aircraft/missile                FTAO        ALL    ANY
20   Report CIWS Kill Status of an aircraft/missile                   MSS/RSC     ALL    ANY
21   Report IDS Query/Warning results                                 IDS         ALL    HIGH

 

                                                                                                          Table 14.        Listing of Watchstander Messages.

 

3.      Watchstander Agents Skill Listings

                    Force Tactical Action Officer (F-TAO)

                    Situation Assessment

                    Information Management

                    Battle Doctrine

                    Platform Knowledge

                    Force Anti-Air Warfare Coordinator (F-AAWC)

                    Situation Assessment

                    Information Management

                    Battle Doctrine

                    Platform Knowledge

                    Ship Tactical Action Officer (S-TAO)

                    Situation Assessment

                    Information Management

                    Battle Doctrine

                    Platform Knowledge

                    Ship Anti-Air Warfare Coordinator (S-AAWC)

                    Situation Assessment

                    Information Management

                    Battle Doctrine

                    Platform Knowledge

                    Combat Systems Coordinator (CSC)

                    Situation Assessment

                    Combat Systems Troubleshooting

                    Platform Knowledge

                    AEGIS Doctrine Employment

                    Communications

                    Radar Systems Controller (RSC)

                    Radar Operations & Track Evaluation

                    Radar Jamming/Deception Track Evaluation

                    Radar Casualty Troubleshooting

                    Communications

                    Electronic Warfare Control Officer (EWCO)

                    Situation Assessment

                    Information Management

                    Battle Doctrine

                    Communications

                    Identification Supervisor (IDS)

                    Situation Assessment

                    Information Management

                    Battle Doctrine

                    Communications

                    Tactical Information Coordinator (TIC)

                    Battle Group Link Equipment Knowledge

                    Link Communication

                    Battle Group Link Coordination/Resolution

                    Communications

                    Red Crown (RC)

                    Aircraft Control

                    Carrier Operations

                    IFF System Operations

                    Communications

                    Missile Systems Supervisor (MSS)

                    Missile Systems Employment

                    CIWS Employment

                    Missile/CIWS Systems Troubleshooting

                    Communications

H.      combat information center (cic) combat systems equipment

1.      Overview

The performance of the CIC equipment in the ADC Simulation was simplified to avoid a substantially complex software component for simulating its operations while still ensuring that the qualitative performance was maintained. Additionally, this abstraction avoided the need to use information classified by the United States Navy, which would have restricted the potential audience.

The CIC equipment can be assigned one of four levels of readiness: Fully Operational, Partially Degraded, Highly Degraded, and Non-operational. Each has an associated performance probability value, as the table below indicates.

                              Fully Operational   Partially Degraded   Highly Degraded   Non-operational
Probability Value out of 1    1.0                 0.75                 0.50              0.0

 

                                                                                                   Table 15.        CIC Equipment Levels of Performance.

 

These values indicate the probability that the given equipment successfully performs a task (i.e., the SPY-1B radar detecting a contact), based on the level of system degradation (if any). Equipment that is fully operational will always be able to perform a task successfully (although, using the SPY-1B radar as an example, it still may not detect a given contact).

In addition to presetting the operational readiness levels of CIC combat systems equipment, if the user has activated the Scenario Equipment Failure Option, then any of the systems can sustain a casualty based on a random probability. When a casualty occurs, the watchstander agents respond and attempt to troubleshoot the origin of the degradation. If successful, the watchstander returns a repair time; otherwise, the watchstander must continue troubleshooting until successful. A sketch of this readiness check appears below.
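
The readiness levels in Table 15 and the random-failure option amount to two simple checks per equipment task. The Java sketch below illustrates them under assumed names (EquipmentSketch, attemptTask, maybeInduceCasualty); the per-tick casualty probability is a placeholder, since the thesis does not list its exact value here.

```java
import java.util.Random;

/** Sketch of CIC equipment task success and random casualties; names are illustrative. */
public class EquipmentSketch {
    enum Readiness { FULLY_OPERATIONAL, PARTIALLY_DEGRADED, HIGHLY_DEGRADED, NON_OPERATIONAL }

    private static final Random RNG = new Random();
    private Readiness readiness = Readiness.FULLY_OPERATIONAL;

    /** Table 15: probability that the equipment performs a task at its current readiness. */
    double successProbability() {
        switch (readiness) {
            case FULLY_OPERATIONAL:  return 1.0;
            case PARTIALLY_DEGRADED: return 0.75;
            case HIGHLY_DEGRADED:    return 0.50;
            default:                 return 0.0;   // NON_OPERATIONAL
        }
    }

    boolean attemptTask() {
        return RNG.nextDouble() < successProbability();
    }

    /** Scenario Equipment Failure Option: random casualty check each tick (rate is a placeholder). */
    void maybeInduceCasualty(boolean failureOptionEnabled, double perTickCasualtyProbability) {
        if (failureOptionEnabled && RNG.nextDouble() < perTickCasualtyProbability) {
            readiness = Readiness.HIGHLY_DEGRADED;  // degraded until the watchstander repairs it
        }
    }
}
```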

In the ADC Simulation, the combat systems equipment was implemented as objects within the watchstander agents, which simplified interaction with them. The following systems are associated with the specified watchstander agents:

 

Combat Systems Equipment          Watchstander
SPY-1B Radar System               Radar Systems Controller
SLQ-32 System                     Electronic Warfare Control Officer
IFF System                        Identification Supervisor
Link 11/16 System                 Tactical Information Coordinator
External Communications System    Identification Supervisor
Vertical Launching System         Missile Systems Supervisor
Close-In Weapon System            Missile Systems Supervisor

 

                                                                        Table 16.        Systems Associated with Specified Watchstander Agents.

 

2.      SPY-1B Radar System

The SPY-1B Radar System is the primary air search and tracking radar on AEGIS cruisers and is a central component of the AEGIS Weapon System. An advanced phased-array radar, it can simultaneously track hundreds of aircraft contacts while conducting search operations to detect new aircraft. A powerful and highly sensitive system, the radar can track aircraft hundreds of miles from its location. Additionally, the ship's surface-to-air missiles are guided using the SPY-1B radar; consequently, a serious casualty to the radar affects the performance of those missile engagements.

Due to classification restrictions, actual SPY-1B phased-array radar formulas could not be used, so the radar was given receiver operating characteristics based on Swerling II statistics, as shown below.[54]

PD(Required) = PF^(1/(1 + CNR(Required)))

To implement the formula, a baseline probability of detection (PD) for the SPY-1B radar and a probability of false alarm (PF) had to be selected, and the corresponding carrier-to-noise ratio (CNR) was then taken from the Swerling II statistics charts. The probability of detection was selected from the AEGIS SPY-1B Radar Sphere Calibration Test Procedure (Naval Sea Systems Command) used by ships to perform system calibration evaluations.

PD(Required) = 0.5 for a 0.254-meter (ten-inch) diameter sphere at 27.78 kilometers (15 nautical miles).

PF(Required) = 10^-4

Based on the above selections, the Swerling II chart yielded a CNR of 16 decibels (dB).

These values are then applied to the formula below to solve for K.

CNR(Required) = K * ( σ / R(Required)^4 )

σ = radar cross section of the calibrated sphere = π * (radius of sphere)^2

σ = π * (0.127 meters)^2 = 0.0507 square meters

After manipulating the formula and inserting the known values from above, K is calculated to be 1.881 x 10^20 (unitless) for the calibrated sphere. Once the value of K is determined, the same formula can be used to calculate the CNR for an aircraft contact at any range (CNR(Detected)). Additionally, the RCS value σ is updated to account for the radar cross section of the aircraft; values of one (1), one hundred (100), and one thousand (1000) square meters are used for missiles, fighter aircraft, and large aircraft (patrol aircraft, commercial airliners), respectively. The slightly modified formula is listed below:

CNR(Detected) = K * ( σ / R(Detected)^4 )

The value of the aircraft's CNR(Detected) is then applied to the initial formula, which has also been slightly modified, to calculate the probability of detection of any aircraft desired.

PD(Detected) = PF^(1/(1 + CNR(Detected)))

This will now yield the probability of detection for a given aircraft contact.
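
The detection calculation above translates directly into code. The following Java sketch reproduces the calibration constant K and the two formulas; it assumes, as the stated K value implies, that the chart value of 16 is applied directly as CNR(Required), and the class and method names are illustrative rather than the thesis implementation.

```java
/** Sketch of the abstracted SPY-1B detection model described above; names are illustrative. */
public class SpyRadarSketch {
    // Calibration constant K = CNR(Required) * R^4 / sigma for the 10-inch sphere,
    // using CNR(Required) = 16, R = 27,780 m, sigma = 0.0507 m^2 (about 1.881e20).
    private static final double K = 16.0 * Math.pow(27_780.0, 4) / 0.0507;

    private static final double PF = 1.0e-4;   // probability of false alarm

    /** CNR(Detected) = K * sigma / R^4, with sigma the contact's radar cross section. */
    static double cnrDetected(double rcsSquareMeters, double rangeMeters) {
        return K * rcsSquareMeters / Math.pow(rangeMeters, 4);
    }

    /** PD(Detected) = PF ^ (1 / (1 + CNR(Detected))). */
    static double probabilityOfDetection(double rcsSquareMeters, double rangeMeters) {
        return Math.pow(PF, 1.0 / (1.0 + cnrDetected(rcsSquareMeters, rangeMeters)));
    }

    public static void main(String[] args) {
        // Example: a fighter-sized contact (100 m^2) at roughly 50 nautical miles (~92,600 m).
        System.out.println(probabilityOfDetection(100.0, 92_600.0));
    }
}
```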

3.      SLQ-32 Electronic Signal Detection System

The SLQ-32 System is used to detect the electronic signals emitted by aircraft and shipboard radar systems. The detected signals are then analyzed by the watchstander with the assistance of the system computer, and a platform (ship and/or aircraft) or listing of platforms is displayed. The SLQ-32 system only provides a bearing (in degrees) to the electronic signal, so the watchstander must estimate its distance from the strength of the signal received. The ADC Simulation can detect the following ES signals:

                    Iraqi Fighter Aircraft

                    Iraqi Patrol Aircraft

                    Iranian Fighter Aircraft

                    Iranian Patrol Aircraft

                    Commercial Aircraft

                    F/A-18 Hornet Aircraft (Friendly Fighter)

                    E-2C Hawkeye Aircraft (Friendly Support Aircraft)

                    Hostile Fire Control Radar (Hostile Aircraft or Missile)

                    Friendly Missile

A "Hostile fire control radar" detection indicates either that an aircraft has targeted the cruiser or carrier and is ready to fire missiles, or that an anti-ship missile is homing in on the ship.

Due to classification restrictions, actual SLQ-32 system operational models could not be used, so an abstracted model was created using a generic representation to achieve similar qualitative performance. This model is displayed in the table below (a code-level sketch of the lookup follows the table):

 

ES Signal Detection Ranges    Probability of Detection
0 - 25 nautical miles         0.99
26 - 50 nautical miles        0.95
51 - 100 nautical miles       0.85
101 - 150 nautical miles      0.70
151 - 200 nautical miles      0.50
201 - 250 nautical miles      0.35
> 250 nautical miles          0.25

 

                                                                          Table 17.        Abstracted Model SLQ-32 System Operational Model.
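
Table 17 reduces to a range-banded lookup. The Java sketch below, with illustrative names, returns the probability of detecting an ES signal at a given range in nautical miles; the IFF detection model in Table 19 uses the same bands and probabilities, so the same lookup applies to that system as well.

```java
import java.util.Random;

/** Sketch of the abstracted SLQ-32 range-to-probability lookup (Table 17); names are illustrative. */
public class Slq32Sketch {
    private static final Random RNG = new Random();

    static double detectionProbability(double rangeNm) {
        if (rangeNm <= 25)  return 0.99;
        if (rangeNm <= 50)  return 0.95;
        if (rangeNm <= 100) return 0.85;
        if (rangeNm <= 150) return 0.70;
        if (rangeNm <= 200) return 0.50;
        if (rangeNm <= 250) return 0.35;
        return 0.25;
    }

    /** One detection attempt against an emitter at the given range. */
    static boolean detect(double rangeNm) {
        return RNG.nextDouble() < detectionProbability(rangeNm);
    }
}
```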

 

4.      Identification Friend or Foe (IFF) System

The Identification Friend or Foe (IFF) System is designed to recognize friendly and neutral aircraft. "With supersonic aircraft and swift antiaircraft missiles, there is no time to identify friendly forces by visual means. IFF is an electronic system which can determine the intent of an aircraft with the speed of the fastest computers."[55] In addition to military aircraft, civilian aircraft operate modified IFF systems. IFF operates through an interrogation-response electronic system. The interrogation system on an aircraft or ship sends out an interrogation pulse that, when received by the transponder system on another platform, makes it respond with that platform's specified IFF codes. When the IFF response code is received, the originating IFF system analyzes the signal and displays the results to the appropriate watchstanders. There are five categories, also known as modes, for IFF systems; they are explained in the table below.

IFF MODE    PURPOSE
Mode 1      Used by military air traffic control to determine the type of aircraft or its mission.
Mode 2      Used by the military to specify the aircraft's identification number (usually displayed on its tail).
Mode 3/A    Used by civilian and military aircraft internationally to uniquely identify aircraft under positive control by air traffic control towers at airports.
Mode 4      Encoded (encrypted) signal used by the military to differentiate friendly aircraft from everyone else.
Mode C      Used by civilian and military aircraft to report their altitude.

 

                                                                                                       Table 18.        Five Categories for the IFF Systems.

 

Due to classification restrictions, actual IFF system operational models could not be used, so a simplified model was created to achieve similar qualitative performance. This model is displayed in the table below:

 

IFF Signal Detection Ranges   Probability of Detection
0 - 25 nautical miles         0.99
26 - 50 nautical miles        0.95
51 - 100 nautical miles       0.85
101 - 150 nautical miles      0.70
151 - 200 nautical miles      0.50
201 - 250 nautical miles      0.35
> 250 nautical miles          0.25

 

                                                                                 Table 19.        Abstracted Model IFF System Operational Model.

 

5.      Link 11 (TADIL A)/Link 16 (TADIL J) System

Link 11 and Link 16 are the primary means by which ships and aircraft transmit their air and surface picture to other units and by which overall situational awareness is maintained by the battle group commander and air-defense commander. Although Link 16 is the newer and more modern system, both are tactical data-exchange systems. A diagram of a Link 16 architecture is displayed below in Figure 24. One of the primary differences between the two systems is that Link 11 requires a Net Control Station (NCS) to centrally control the performance and operation of the data link.

 

                                                                                                                                   Figure 24.                   Link 16 Example.

 

Link 11 uses a poll-response scheme in which each platform is directed by the NCS to transmit its information; once a polling cycle is complete, the NCS transmits the updated battlespace picture to all participants. Link 16 allows all participating units to transmit simultaneously and adds other improvements, including resistance to communications jamming and an increased data rate. Link 11/16 operations in the ADC Simulation were simplified so that only aircraft contacts at a range greater than seventy (70) nautical miles from the AEGIS cruiser are entered into the Link for the TIC watchstander to evaluate.

6.      External Communications System

The External Communications System handles the voice communications between the AEGIS cruiser and other aircraft and ships. There are typically two types of voice communications, satellite and standard radio. In the simulation, only standard radio communication is implemented to simplify matters.

7.      Vertical Launching System (Surface-to-Air Missiles)

The Vertical Launching System (VLS) stores and launches the various types of missiles located in its magazines. The VLS is capable of launching surface-to-air missiles (Standard Missile), Tomahawk cruise missiles, and Vertical Launch Antisubmarine Rockets (VLA). In the ADC Simulation, only the surface-to-air missiles are implemented; they are assigned a 0.70 probability of intercepting their target and have a range of eighty nautical miles. Only two missiles are launched against a target at any time. If the missiles fail to intercept their target, two additional missiles are fired. A sketch of this engagement logic appears below.
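
The VLS behavior above can be sketched as a simple salvo loop. In the fragment below, treating 0.70 as a per-missile intercept probability is an assumption (the text does not state whether it applies per missile or per salvo), and the names are illustrative.

```java
import java.util.Random;

/** Sketch of the simplified VLS engagement logic; whether 0.70 applies per missile
 *  or per salvo is an assumption here (per missile), and the names are illustrative. */
public class VlsEngagementSketch {
    private static final Random RNG = new Random();
    private static final double PER_MISSILE_INTERCEPT_PROBABILITY = 0.70;
    private static final double MAX_RANGE_NM = 80.0;
    private static final int MISSILES_PER_SALVO = 2;

    /** Fires one two-missile salvo; returns true if either missile intercepts the target. */
    static boolean fireSalvo(double targetRangeNm) {
        if (targetRangeNm > MAX_RANGE_NM) return false;   // target outside missile range
        boolean killed = false;
        for (int i = 0; i < MISSILES_PER_SALVO; i++) {
            killed |= RNG.nextDouble() < PER_MISSILE_INTERCEPT_PROBABILITY;
        }
        return killed;   // if false, the watch team orders two more missiles fired
    }
}
```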

8.      Close-In Weapons System (CIWS)

The Close-In Weapons System (CIWS), also known as the Phalanx, is a twenty-millimeter shipboard self-defense system that contains its own radar and fire-control system. The CIWS was designed as the last line of defense against incoming anti-ship missiles and fires at a rate of 3000-4500 rounds per minute. The CIWS can be operated in several different modes, from completely automated to manual targeting and firing. In the simulation, the CIWS has a range of one nautical mile and is assigned a 0.50 probability of hitting its target.

I.      simulation log records and event reconstruction

1.      Overview

The ADC Simulation records every event that occurs within a scenario so that the scenario can later be reviewed and analyzed by the user. Four categories of record logs are maintained, along with a fifth type that allows the user to search the other logs more finely. They are the Scenario Events Log, the Watchstander Decision History Log, the CIC Equipment Readiness Log, the Watchstander Performance Log, and the Parser/Analyzer Log. Each records events in chronological order, and events can originate with the watchstander or with reports initiated by other watchstanders (or equipment).

2.      Scenario Events Log

The Scenario Events Log maintains a high-level record of all events within a scenario. These events help the user form a broad understanding of the simulated CIC team's overall perception of the battle space. An example Scenario Events Log is displayed below in Figure 25.

                                                                                                                             Figure 25.                   Scenario Events Log.

 

3.      Watchstander Decision History Log

The Watchstander Decision History Log provides a detailed account of an individual watchstander agent's actions and inputs for later review. Each watchstander in the CIC team has a separate decision history log to record events significant to that watchstation. An example of a Decision History Log is displayed below in Figure 26.

 

                                                                                                       Figure 26.                   Watchstander Decision History Log.

 

 

4.      CIC Equipment Readiness Log

The CIC Equipment Readiness Log provides a detailed account of combat systems operational performance for later review. Each piece of CIC equipment has a separate readiness log to record events significant to that equipment. An example of a CIC Equipment Readiness Log is displayed in Figure 27.

 

                                                                                                              Figure 27.                   CIC Equipment Readiness Log.

 

5.      Watchstander Performance Log

The Watchstander Performance Log maintains the overall performance metrics for each watchstander as well as the overall CIC team performance. An example of the Watchstander Performance Log is displayed below in Figure 28.

 

                                                                                                            Figure 28.                   Watchstander Performance Log.

 

To track the performance of the CIC team, the average initial detection time, average initial classification time, and average correct classification time for aircraft are calculated. For individual watchstanders, the metrics maintained are the number of errors, the total number of actions attempted by the watchstander, the percentage of errors in attempted actions, the average primary watchstander task time, and the average communications time (how long it takes to transmit a message).

6.      Parser/Analyzer Log

In the ADC Simulation, the Parser/Analyzer Log is a tool to extract desired information about aircraft contacts. The Parser/Analyzer accepts as input the track number of an aircraft and then searches through all logs to retrieve the log entries about that aircraft. An example of the Parser/Analyzer Log is displayed in Figure 29.

                                                                                                                              Figure 29.                   Parser/Analyzer Log.

 

J.      adc simulation external/environmental attributes

1.      Overview

The ADC Simulation allows the user to modify some of the external attributes of the simulation. These attributes influence the CIC watch team by adding external stresses and overall workload to their situation as the levels of the selected options increase.

2.      Atmosphere/Weather

There are three options for the weather conditions attribute: clear weather, heavy rain, and heavy clutter environment. The primary effect the weather attribute has on a scenario is on detection and communications systems such as the SPY-1B radar, SLQ-32 System, IFF System, and Link 11/Link 16 System. The heavy clutter option causes a ten percent penalty to the probability of successful detection for the above systems.

                                                                                                                Figure 30.                   Weather Conditions Window.

 

3.      Contact Density

The Contact Density attribute controls the number of aircraft contacts (neutral and hostile) introduced into a scenario. The number of contacts arriving in the simulation directly influences the workload of the watchstanders in the CIC.

 

                                                                                                                      Figure 31.                   Contact Density Window.

 

4.      Scenario Threat Level

The Scenario Threat Level attribute directly influences several other areas of a scenario. First, this attribute affects the parameters used by the CIC watch team (the Force TAO and Force AAWC specifically) to classify aircraft contacts: the higher the threat level, the more likely the team is to classify an aircraft as Suspect or Hostile. The second area influenced is the aggressiveness of the hostile contacts. In Scenario Threat Level White there is a low probability of attack by hostile aircraft, but as the threat level increases, the probability of attack rises accordingly.

                                                                                                             Figure 32.                   Scenario Threat Level Window.

 

5.      Hostile Contact Level

The Hostile Contact Level determines the overall probability that Hostile aircraft contacts appear in the scenario, which ultimately affects the CIC watch team if these contacts initiate aggressive behaviors. Additionally, the watch team is impacted because the more contacts it classifies as hostile, the more resources (time, focus) are required to maintain track on them, which can distract the team from other hostile aircraft.

                                                                                                              Figure 33.                   Hostile Contact Level Window.

 

K.      adc simulation doctrine attributes

1.      Overview

Within the simulation there are two types of doctrine that the watchstanders follow in the performance of their air-defense duties: AEGIS Doctrine and Air Defense Doctrine.

2.      AEGIS Doctrine

AEGIS Doctrine consists of situational parameters that are predetermined by the CIC watch team and input into the AEGIS Weapon System to assist the team in the performance of its duties. AEGIS Doctrine consists of Auto-Standard Missile Doctrine, Auto-Special (Missile) Doctrine, IFF Doctrine, Identification Doctrine, and Drop-Track Doctrine. The first two are considered weapons doctrine and are used to reduce reaction time and human error when an airborne contact meets the predefined parameters for a hostile, imminent, and dangerous threat to the ship. Depending on the specifications, these doctrines will either bring the surface-to-air missile systems to the point where they are ready to fire (awaiting the human watchstander to push the Fire button) or will actually consummate the engagement if programmed to do so. The weapons doctrine can be of significant assistance to the watch team, especially if the ship is faced with a potential threat that is rapidly approaching or is detected very late, thereby reducing the engagement time. In the simulation, the only AEGIS doctrine implemented is the Auto-Special Doctrine. The input screen for Auto-Special Doctrine is displayed in Figure 34.

 

                                                                                   Figure 34.                   AEGIS (Auto-Special) Doctrine Popup Window.

 

The other type of doctrine, non-weapons doctrine, assists the watch team in managing the air-defense identification process by relieving the watchstanders of many of the time-consuming tasks of manually evaluating aircraft contacts. The IFF doctrine automatically interrogates aircraft contacts based on the parameters (ranges, bearings) set by the watch team. The Identification (ID) doctrine uses the parameters selected by the watch team (altitude, speed, course, etc.) to automatically classify aircraft tracks that meet the criteria. This doctrine is typically used to identify neutral aircraft, such as commercial airliners, which have fairly predictable kinematic attributes. The last type of doctrine, Drop-Track doctrine, is employed by the watch team to manage the number of aircraft tracks displayed on the CIC console screens and processed by both the AEGIS Weapon System and the watchstanders. The Drop-Track doctrine assists in heavy-clutter environments by eliminating much of the clutter detected by the radar (falsely, as real aircraft contacts) before it is displayed on the screen, which would otherwise require the watchstanders to waste resources evaluating it.

L.      discussion of probability and skill-time values in adc simulation

Several of the attribute values used in the ADC Simulation were selected through subjective processes (i.e., interviews). A component was incorporated into the simulation to allow users to modify certain skill-associated maximum task times and attribute probability values for each watchstander. This capability gives the simulation significant flexibility with respect to the accuracy of watchstander performance attributes, because as research studies in these areas are completed, the resulting times and values can be used to update the program.

 

                                                                                                 Figure 35.                   Skill Probabilities Modification Window.

 

M.      air-defense contact identification, threat assessment and classification in the simulation

Air defense in the ADC Simulation was developed using information from the research discussed and the interviews with the air-defense experts from the ATRC detachment in San Diego. It can be divided into three phases: (1) initial contact detection and information reporting; (2) contact classification; and (3) action response to the specified aircraft contact. The first phase is marked by the detection of an aircraft contact by one of the primary sensory input watches (the RSC, EWCO, TIC, IDS, or Red Crown agents), which in turn informs the rest of the simulated CIC team to initiate the evaluation process. The second phase consists of the Force TAO and Force AAWC agents (the primary decision-makers), along with the Ship TAO and Ship AAWC agents, analyzing the information provided by the sensory input and conducting an initial classification of the contact. Phase three starts with the Force TAO and/or Force AAWC agents giving orders to the primary action watchstations (Ship TAO, Ship AAWC, CSC, MSS, and IDS) to perform tasks either to gain more information about the aircraft contact or to engage the contact to protect the battle group. The following tasks apply to phase three:

                    Order IDS to query aircraft contact.

                    Order IDS to conduct warning of aircraft contact.

                    Order Ship AAWC (via Ship TAO) to conduct intercept and visual identification of aircraft contact with friendly aircraft.

                    Order Ship AAWC (via Ship TAO) to conduct intercept and engagement of aircraft contact with friendly aircraft.

                    Order MSS (via Ship TAO and Ship AAWC) to conduct intercept and engagement of aircraft contact with cruiser�s surface-to-air missiles.

The Ship TAO agent can also order the last two actions if there is a perceived imminent danger to the ship.

An area of particular interest is the contact detection and information collection performed by the key sensory watch agents. As in an actual CIC, when a watchstander agent detects a new contact not previously examined by any other agent, it passes an alert message to the CIC. This message cues the other sensory watch agents (upon completion of their current task) to focus their attention on the new aircraft contact and gather relevant information. The result of this collaboration is a level of synchronization that mirrors actual CIC behavior and delivers vital information about the new contact to the primary decision-maker watches very quickly. The process is displayed in Figure 36.

 

                                         Figure 36.                   Watchstander Agent Collaborative Contact Detection and Reporting Process.

Most watchstander agents in the simulation must evaluate aircraft contacts continuously as part of their duties, which requires a prioritization similar to that used by an actual CIC team. Below are tables showing the prioritization criteria for each watchstation (a code-level sketch of the selection logic follows the tables). Four criteria are available: new contact, closest contact to the cruiser/carrier, contact approaching the cruiser/carrier, and longest period of time since the last look. Not all watchstander agents in the simulation use the same criteria: certain watchstations only require (and can only support) a few criteria to prioritize the selection process appropriately, while others need all of them to perform realistically. Additionally, during the contact selection and evaluation process, the watchstander agents periodically reevaluate aircraft contacts, a behavior consistent with observations of actual CIC teams.

 

 

Priority #   New Contact (Not Analyzed)   Closest Contact to Cruiser/CVN   Contact Closing Cruiser/CVN   Longest Period Since Last Look
1            X                            X                                X
2            X                            X
3            X                                                             X
4            X
5                                         X                                X                             X
6                                         X                                                              X
7                                                                          X                             X

 

                                                                                Table 20.        Force TAO Contact Selection Prioritization Criteria.

 

 

Priority #   New Contact (Not Analyzed)   Closest Contact to Cruiser/CVN   Contact Closing Cruiser/CVN   Longest Period Since Last Look
1            X                            X                                X
2                                         X                                X                             X
3            X                                                             X
4                                                                          X                             X
5            X                            X
6            X
7                                         X                                                              X

 

                                                                            Table 21.        Force AAWC Contact Selection Prioritization Criteria.

 

 

Priority #   New Contact (Not Analyzed)   Closest Contact to Cruiser/CVN   Contact Closing Cruiser/CVN   Longest Period Since Last Look
1            X                            X                                X
2            X                            X
3            X                                                             X
4            X
5                                         X                                X                             X
6                                         X                                                              X
7                                                                          X                             X

 

                                                                                  Table 22.        Ship TAO Contact Selection Prioritization Criteria.

 

Priority #   New Contact (Not Analyzed)   Closest Contact to Cruiser/CVN   Contact Closing Cruiser/CVN   Longest Period Since Last Look
1            X                            X                                X
2            X                            X
3            X                                                             X
4            X
5                                         X                                X                             X
6                                         X                                                              X
7                                                                          X                             X

 

                                                                                          Table 23.        RSC Contact Selection Prioritization Criteria.

 

Priority #   New Contact (Not Analyzed)   Closest Contact to Cruiser/CVN   Contact Closing Cruiser/CVN   Longest Period Since Last Look
1            X                                                                                           X
2            X
3                                                                                                        X

 

                                                                                      Table 24.        EWCO Contact Selection Prioritization Criteria.

 

Priority #   New Contact (Not Analyzed)   Closest Contact to Cruiser/CVN   Contact Closing Cruiser/CVN   Longest Period Since Last Look
1            X                            X                                X
2            X                            X
3            X                                                             X
4            X
5                                         X                                X                             X
6                                         X                                                              X
7                                                                          X                             X

 

                                                                                            Table 25.        IDS Contact Selection Prioritization Criteria.

 

Priority #   New Contact (Not Analyzed)   Closest Contact to Cruiser/CVN   Contact Closing Cruiser/CVN   Longest Period Since Last Look
1            X                                                             X                             X
2            X                                                                                           X
3            X
4                                                                                                        X

 

                                                                                            Table 26.        TIC Contact Selection Prioritization Criteria.

 

Priority #   New Contact (Not Analyzed)   Closest Contact to Cruiser/CVN   Contact Closing Cruiser/CVN   Longest Period Since Last Look
1            X                            X                                X
2            X                            X
3            X                                                             X
4            X
5                                         X                                X                             X
6                                         X                                                              X
7                                                                          X                             X

 

                                                                                Table 27.        Red Crown Contact Selection Prioritization Criteria.
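
Each of the tables above is, in effect, an ordered list of criteria combinations: the agent scans its contact list and takes the first contact matching the highest-priority row. The Java sketch below illustrates that scan for a generic priority table; the Contact fields and method names are illustrative, not the thesis implementation.

```java
import java.util.List;

/** Sketch of priority-table contact selection (Tables 20-27); names are illustrative. */
public class ContactSelectionSketch {
    static class Contact {
        boolean newContact;        // not yet analyzed
        boolean closestToShip;     // closest contact to cruiser/CVN
        boolean closingShip;       // contact closing cruiser/CVN
        boolean longestSinceLook;  // longest period since last look
    }

    /** One row of a prioritization table: which criteria must all hold. */
    static class PriorityRow {
        final boolean needNew, needClosest, needClosing, needLongest;
        PriorityRow(boolean n, boolean c1, boolean c2, boolean l) {
            needNew = n; needClosest = c1; needClosing = c2; needLongest = l;
        }
        boolean matches(Contact c) {
            return (!needNew || c.newContact) && (!needClosest || c.closestToShip)
                && (!needClosing || c.closingShip) && (!needLongest || c.longestSinceLook);
        }
    }

    /** Returns the first contact matching the highest-priority row, or null if none match. */
    static Contact selectNext(List<PriorityRow> table, List<Contact> contacts) {
        for (PriorityRow row : table) {              // rows are ordered priority 1, 2, 3, ...
            for (Contact c : contacts) {
                if (row.matches(c)) return c;
            }
        }
        return null;
    }
}
```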

 

                                                                                                  Figure 37.                   Generic Air Contact Classification Path.

 

N.      air-defense decision-making: inside the heads of the f-tao and f-aawc watchstander agents

In an actual AEGIS cruiser CIC, the Force TAO and Force AAWC watchstanders (as well as the Ship TAO and Ship AAWC) are responsible for making classification decisions about the aircraft contacts evaluated by the watch team, and the same pattern is used in the ADC Simulation. These watchstanders are presented with a variety of data about the aircraft contacts from various sources, known as cueing information, which they use to conduct their assessments. The assessments take the form of templates that indicate the expected behaviors of aircraft contacts and are categorized as Hostile, Suspect, Neutral, Unknown, and Friend. The evaluation input cues used by the simulated watchstanders are listed below.

 

#   Contact Input Cues/Factors Category
1   Altitude
2   Speed
3   Radar Electronic Signal
4   Course
5   Point of Origin
6   IFF Mode
7   Query/Warning Response

 

                                                                                  Table 28.        Evaluation Input Cues Used by the Watchstanders.

 

To implement the cognitive and decision-making aspects of watchstander air-defense classification, an artificial neuron configuration was used. Artificial neurons (often incorporated into networks called neural networks) can closely parallel these processes in humans. A neuron consists of:

                    Input values - These data may come from the environment or the activation of other neurons.

                    Real-valued weights - The weights are used to describe connection strengths.

                    An activation level - The neuron's activation level is determined by the cumulative strength of its input signals, where each input is scaled by the connection weight along the input line. The activation level is thus computed by taking the sum of the scaled inputs.

                    A threshold function - This computes the neuron's final or output state by determining how far the neuron's activation level is below or above some threshold value.[56]

Displayed in Figure 38 is the design of the artificial neuron for the watchstander agent classification component.

 

                                                                                                  Figure 38.                   Contact Classification Artificial Neuron.

 

Each input value has an associated weight determined by the accuracy and "convincing strength" of the attribute. The convincing strength refers to the believability or persuasiveness of the input value as a function of its source. For example, in the ADC Simulation, if conflicting information were received from two different attributes, such as the radar electronic signal (ES) and IFF mode-one input cues, the radar ES cue would have the higher convincing strength because it is verifiable by the cruiser's own sensor equipment and is very difficult to fake. Conversely, the IFF mode values can be modified by the pilot of an aircraft, which opens them to use for deception.

Classifications (Hostile, Suspect, Neutral, Unknown, and Friend) are differentiated using threshold values that are dependent on the current scenario threat level. As the threat level is changed, these threshold values change. The default threshold values are displayed below.

Contact Classification   Threat Level White Thresholds   Threat Level Yellow Thresholds   Threat Level Red Thresholds
Hostile                  ≥ +600                          ≥ +500                           ≥ +450
Suspect                  +500 to +599                    +450 to +499                     +400 to +449
Neutral                  +400 to +499                    +300 to +449                     +200 to +399
Unknown                  -399 to +399                    -399 to +301                     -399 to +199
Friend                   ≤ -400                          ≤ -400                           ≤ -400

 

                                                                                                   Table 29.        Default Classification Threshold Values.

 

To classify an aircraft, the appropriate simulated watchstanders evaluate all available attribute/input-cue information about the contact and sum the associated values. The contact is then categorized based on the threshold bin into which the summed value falls. The scoring (weight) values for the various input cues are displayed below.

 

#    Contact Input Cues/Factors Category   Contact Input Cues/Factors    Score Value (Weight)
1    Altitude                              Very High                     +20
2    Altitude                              High                          +40
3    Altitude                              Medium                        +60
4    Altitude                              Low                           +80
5    Altitude                              Very Low                      +100
6    Speed                                 Very Fast                     +100
7    Speed                                 Fast                          +80
8    Speed                                 Medium                        +60
9    Speed                                 Slow                          +50
10   Speed                                 Very Slow                     +40
11   Radar Electronic Signal               Hostile Fire Control Radar    +800
12   Radar Electronic Signal               Hostile Aircraft Radar        +300
13   Radar Electronic Signal               Unknown Aircraft Radar        +50
14   Radar Electronic Signal               Neutral Aircraft Radar        +80
15   Radar Electronic Signal               Friendly Aircraft Radar       -400
16   Course                                Closing/Approaching Cruiser   +50
17   Course                                Opening/Departing Cruiser     0
18   Point of Origin                       Hostile Point of Origin       +100
19   Point of Origin                       Unknown Point of Origin       +80
20   Point of Origin                       Neutral Point of Origin       +50
21   Point of Origin                       Friendly Point of Origin      -100
22   IFF Mode                              IFF Mode 1                    +50
23   IFF Mode                              IFF Mode 2                    +50
24   IFF Mode                              IFF Mode 3                    +50
25   IFF Mode                              IFF Mode 1 - Friend Codes     -50
26   IFF Mode                              IFF Mode 2 - Friend Codes     -50
27   IFF Mode                              IFF Mode 3 - Friend Codes     -50
28   IFF Mode                              IFF Mode 4 - Friend           +600
29   IFF Mode                              IFF Mode 4 - None             +50
30   IFF Mode                              IFF Mode C                    +50
31   Query/Warning Response                Favorable Response            -50
32   Query/Warning Response                No Response                   25
33   Query/Warning Response                Unfavorable Response          100

 

                                                                            Table 30.        Scoring (Weighted) Values for the Various Input Cues.

 

Contacts have the highest probability of being categorized as Unknown because often only partial information is available about an aircraft. As more of the input cueing data becomes available, the aircraft's classification moves toward a Neutral, Suspect, or Hostile assessment. The most difficult and infrequent classifications are Hostile and Friend; these require either a preponderance of input cues of lesser convincing strength or a few cues of significant persuasiveness. A sketch of the weighted-sum classification appears below.
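
The following Java fragment is a minimal sketch of the weighted-sum classification, using a small subset of the cue weights from Table 30 and the Threat Level White thresholds from Table 29; the class, enum, and method names are illustrative, and the full program sums contributions from all seven cue categories.

```java
/** Sketch of the artificial-neuron contact classification; names are illustrative. */
public class ClassificationSketch {
    enum Classification { HOSTILE, SUSPECT, NEUTRAL, UNKNOWN, FRIEND }

    /** Sum the weights of whichever input cues are currently known (a small subset of Table 30). */
    static int activation(boolean hostileFireControlRadar, boolean friendlyAircraftRadar,
                          boolean closingCruiser, boolean hostilePointOfOrigin) {
        int sum = 0;
        if (hostileFireControlRadar) sum += 800;   // Table 30, cue 11
        if (friendlyAircraftRadar)   sum -= 400;   // Table 30, cue 15
        if (closingCruiser)          sum += 50;    // Table 30, cue 16
        if (hostilePointOfOrigin)    sum += 100;   // Table 30, cue 18
        return sum;
    }

    /** Threshold function for Threat Level White (Table 29). */
    static Classification classifyWhite(int activation) {
        if (activation >= 600)  return Classification.HOSTILE;
        if (activation >= 500)  return Classification.SUSPECT;
        if (activation >= 400)  return Classification.NEUTRAL;
        if (activation <= -400) return Classification.FRIEND;
        return Classification.UNKNOWN;
    }

    public static void main(String[] args) {
        // A contact with a hostile fire-control radar, closing the cruiser from a hostile airfield.
        System.out.println(classifyWhite(activation(true, false, true, true)));  // HOSTILE
    }
}
```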

When an artificial neuron is implemented, an extensive testing period is typically necessary to train the component to produce the optimum output, which in this case would be more reliable and accurate classification results. Only a moderate level of neuron training was conducted due to time constraints.

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 


V.      research question results and evaluation of the simulation

A.      research question introduction

1.      Overview

Using the research questions discussed in Chapter One as a guide, five sets of specific questions were selected as the focus of parametric testing and analysis of the ADC Simulation. These question sets explored the influence of watchstander and scenario attributes on the performance of the RSC agent, EWCO agent, and Force TAO agent individually and on the CIC team collectively. For the four sets of questions relating to the RSC, EWCO, and Force TAO agents and the scenario weather, a single attribute was modified at a time (for just that single watchstander or scenario attribute) while the rest of the watchstander and scenario attributes were held fixed. Two separate CIC team skill, experience, and fatigue attribute profiles were created and tested to determine the differences in performance.

2.      Testing Methodology

a.      Scenario Default Settings

Unless otherwise indicated, the following settings were used in the tests:

 

1) Watchstanders

����������������������� - Skill levels:�������������������������� Experienced

����������������������� - Experience level:����������������� Experienced

����������������������� - Fatigue level:������������� Fully Rested

����������������������� - Decision-maker Type:��������� Balanced

 

����������������������� 2) CIC Equipment�������������������������������

- Readiness level:��������������������� Fully Operational

 

����������������������� 3) External Environment

����������������������� - Contact Density:�������������������� Medium

����������������������� - Threat Level:������������������������� White

����������������������� - Hostile Contact Number:������� Low

����������������������� - Weather:������������������������������� Clear
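
The defaults above could be captured in a simple configuration object. The sketch below is purely illustrative; the class, enum, and field names are assumptions and do not reproduce the simulation's actual Java classes.

    /** Illustrative holder for the default scenario settings listed above.
     *  All type and field names are assumptions for illustration only. */
    public class ScenarioDefaultsSketch {
        enum Skill         { BASIC, EXPERIENCED, EXPERT }
        enum Experience    { NEWLY_QUALIFIED, EXPERIENCED, EXPERT }
        enum Fatigue       { FULLY_RESTED, TIRED, EXHAUSTED }
        enum DecisionMaker { CAUTIOUS, BALANCED, AGGRESSIVE }
        enum Readiness     { FULLY_OPERATIONAL, PARTIALLY_DEGRADED, HIGHLY_DEGRADED }

        // Watchstanders
        Skill skill = Skill.EXPERIENCED;
        Experience experience = Experience.EXPERIENCED;
        Fatigue fatigue = Fatigue.FULLY_RESTED;
        DecisionMaker decisionMaker = DecisionMaker.BALANCED;

        // CIC equipment
        Readiness equipmentReadiness = Readiness.FULLY_OPERATIONAL;

        // External environment
        String contactDensity = "Medium";
        String threatLevel = "White";
        String hostileContactNumber = "Low";
        String weather = "Clear";
    }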

 

 

 

 

 

b. Number of Runs

For each attribute (or set of attributes) modified, ten scenario runs were conducted, which resulted in 170 individual tests divided into the following categories:

- 40 runs for the RSC agent testing
- 40 runs for the EWCO agent testing
- 40 runs for the F-TAO agent testing
- 20 runs for the CIC watch team testing
- 30 runs for the scenario weather testing

c. Limitation of Variability in Testing

Consistent performance across multiple evaluation runs during parametric testing is essential to the validity of the results, and consequently, some modifications to the simulation were necessary. To limit the effect of non-essential variables that could affect the results of the testing, the following aspects of the simulation were fixed:

- Kinematic attributes of aircraft contacts (course, speed, altitude)
- Designated aircraft contact starting locations (four points selected)
- All aircraft created at a given starting point assigned the same destination point
- Finite number of aircraft contacts for each test run (50 contacts)
- Generation of new contacts disallowed
- Defensive measures by the AEGIS cruiser (e.g., launching missiles) disallowed
- All contacts created as neutral aircraft (commercial airliners) with ES and IFF attributes

3. Philosophy of Testing and Data Results Analysis

Before reviewing the data from the testing, it is necessary to discuss the philosophy underpinning it. Simulations are sometimes treated as exact replicas of the systems they represent, and their results are consequently viewed with a level of trust not commensurate with the underlying model design. This belief often leads to the mistaken conclusion that simulations (and the results they produce) can be used in lieu of actual testing, even though the difficulty and cost of actual testing is precisely what spurs the development of simulations in the first place. Such views can be dangerous: if not placed in the proper context, simulations can lead their users to incorrect assessments.

Except for the most basic cases, every simulation represents a complex system through a simplified model that attempts to recreate the qualitative behavior, output, and performance of its real-world counterpart. This simplification must be emphasized to the user so it is understood that the simulation does not produce clear-cut results that can be applied immediately to actual problems. Viewed in its proper context, the combination of a simulation model and careful analysis of its results can produce insight into a complex system's dynamics and internal relationships. However, this insight must never be confused with comprehensively identifying the causes of real-world events. Simulation results are best used to suggest areas for continued exploration, which must then be pursued through empirical testing in the real world for accurate validation.

4. Philosophy of the Use of the ADC Simulation

Since most of the characteristics of the individual watchstander agents were developed using expected-behavior models (and are implemented as functions), the test results in these areas should be examined with an eye toward verifying their proper performance. The results of the simulation scenarios can then be used for guidance and inference in areas that will require real-world testing to validate. The ADC Simulation, like other agent-based simulation models, offers a way to conduct scientific analysis of complex systems such as the inputs, outputs, and interactions of a CIC team. As the model is refined through the inclusion of additional realistic detail and validation against real-world performance, the simulation can be used with greater trust in its results.

5. Simulation Testing Input Settings and Measurements Lists

a. Inputs and Functions

The following aspects of the ADC Simulation were implemented. These aspects govern many of the fundamental performance characteristics of the AEGIS cruiser, aircraft contacts, combat systems equipment, and watchstander agents.

- Watchstander agents
- CIC equipment
- The external scenario environment
- Aircraft contact behaviors and performance

b. Independent Variables

The following areas of the ADC Simulation are independent variables that can be modified/set by the user before and during the scenarios:

- Watchstander agent attribute levels (skill, experience, etc.)
- CIC equipment readiness levels
- External scenario attribute levels/options
- AEGIS doctrine settings
- Attribute probability values
- Maximum task times

c. Dependent Variables

The following aspects and relationships in the ADC Simulation are dependent variables:

- Performance relationships among the watchstanders
- Aircraft contact classifications by the CIC team
- Subsequent CIC team actions based on the classifications
- Overall performance of the CIC team in air-defense duties

d. Test Categories

The following performance measures were recorded for each test run (a computation sketch follows the list):

- Watchstander (W/s) Average Task Time: the average amount of time for the watchstander to complete tasks during a test run.
- Watchstander Average Message (Msg.) Transmit Time: the average amount of time for the watchstander to transmit messages during a run.
- Watchstander Task Error Percentage: the percentage of errors committed during attempted watchstander tasks, out of the total number of attempts.
- CIC Average Initial Detection Radar Time: the average time between the time contacts are created and the time they are detected by the AEGIS cruiser (SPY-1B radar and CIC team).
- CIC Average Initial Classification Time: the average time between the time contacts are initially detected and the time they are initially classified by the CIC team.
- CIC Average Correct Classification Time: the average time between the time contacts are initially detected and the time they are correctly classified by the CIC team.
- CIC Classification Error Percentage: the percentage of aircraft contact classification errors committed by the CIC team, out of the total number of attempts.
- Average Number of Attempted CIC Classifications: the average number of classification attempts made by the CIC team during a test run.
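
The timing metrics above are differences between logged event times, averaged over a run; the error percentages are simple ratios. The Java sketch below shows how two of them might be computed. The record fields and method names are assumptions for illustration and are not the simulation's actual API.

    import java.util.List;

    /** Illustrative computation of two of the test metrics defined above. */
    public class RunMetricsSketch {

        /** One classification attempt: when the contact was first detected,
         *  when it was classified, and whether the classification was correct. */
        public record Attempt(double detectTime, double classifyTime, boolean correct) {}

        /** CIC Average Initial Classification Time: mean of (classification time - detection time). */
        public static double avgInitialClassificationTime(List<Attempt> attempts) {
            return attempts.stream()
                           .mapToDouble(a -> a.classifyTime() - a.detectTime())
                           .average()
                           .orElse(0.0);
        }

        /** CIC Classification Error Percentage: incorrect attempts out of all attempts. */
        public static double classificationErrorPercentage(List<Attempt> attempts) {
            long errors = attempts.stream().filter(a -> !a.correct()).count();
            return attempts.isEmpty() ? 0.0 : 100.0 * errors / attempts.size();
        }
    }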

B. RADAR SYSTEMS CONTROLLER (RSC) AGENT TESTING AND ANALYSIS RESULTS

1. Expected Results Based on Air-Defense Expert Interviews

It was expected that as the skill attribute was increased from basic to expert, the watchstander task error percentage and the CIC-team classification error percentage would decrease. Increasing the experience attribute from newly qualified to expert was expected to decrease the RSC agent's task and communication times as well as the CIC-team times, especially the average initial radar detection time. Increasing the fatigue attribute from fully rested to exhausted was expected to increase the agent's task error percentage and the CIC-team classification error percentage. Lastly, degrading the SPY-1B radar readiness attribute from fully operational to highly degraded was expected to increase both the watchstander times (task and message transmission) and the CIC-team times.

2. Results from the Simulation (See Appendix C Section A for Graphs)

 

Radar Operations Skill | W/s Avg. Task Time | W/s Avg. Msg. Transmit Time | W/s Task Error Percentage
Basic       | 3.76 | 1.16 | 17.40%
Experienced | 3.78 | 1.19 | 13.40%
Expert      | 3.67 | 1.16 | 21.20%

Radar Operations Skill | CIC Avg. Initial Radar Detect Time | CIC Avg. Initial Classif. Time | CIC Avg. Correct Classif. Time | Classification Error Percentage | Avg. # of Attempted CIC Classifications
Basic       | 150.80 | 103.83 | 111.54 | 8.10%  | 21.00
Experienced | 136.39 | 108.89 | 111.77 | 9.92%  | 24.20
Expert      | 134.90 | 109.47 | 103.82 | 11.36% | 19.56

Table 31. Radar Operations Skill Tests.

 

Experience Level | W/s Avg. Task Time | W/s Avg. Msg. Transmit Time | W/s Task Error Percentage
Newly Qualified | 3.58 | 1.20 | 19.60%
Experienced     | 3.45 | 1.27 | 19.20%
Expert          | 3.64 | 1.24 | 22.20%

Experience Level | CIC Avg. Initial Radar Detect Time | CIC Avg. Initial Classif. Time | CIC Avg. Correct Classif. Time | Classification Error Percentage | Avg. # of Attempted CIC Classifications
Newly Qualified | 153.29 | 106.94 | 113.32 | 10.18% | 22.60
Experienced     | 145.61 | 105.08 | 129.00 | 7.29%  | 24.70
Expert          | 166.94 | 102.09 | 82.26  | 15.63% | 22.40

Table 32. Experience Level Tests.

 

Fatigue Level | W/s Avg. Task Time | W/s Avg. Msg. Transmit Time | W/s Task Error Percentage
Fully Rested | 3.58 | 1.20 | 19.60%
Tired        | 3.45 | 1.27 | 19.20%
Exhausted    | 3.64 | 1.24 | 22.20%

Fatigue Level | CIC Avg. Initial Radar Detect Time | CIC Avg. Initial Classif. Time | CIC Avg. Correct Classif. Time | Classification Error Percentage | Avg. # of Attempted CIC Classifications
Fully Rested | 153.29 | 106.94 | 113.32 | 9.09%  | 24.20
Tired        | 145.51 | 105.08 | 129.00 | 11.71% | 20.50
Exhausted    | 166.94 | 102.09 | 82.26  | 12.83% | 22.60

Table 33. Fatigue Level Tests.

 

SPY-1B Radar Operational Readiness | W/s Avg. Task Time | W/s Avg. Msg. Transmit Time | W/s Task Error Percentage
Fully Operational  | 3.55 | 1.27 | 21.00%
Partially Degraded | 3.67 | 1.09 | 18.89%
Highly Degraded    | 3.82 | 1.28 | 20.20%

SPY-1B Radar Operational Readiness | CIC Avg. Initial Radar Detect Time | CIC Avg. Initial Classif. Time | CIC Avg. Correct Classif. Time | Classification Error Percentage | Avg. # of Attempted CIC Classifications
Fully Operational  | 142.27 | 105.45 | 86.37  | 10.68% | 20.60
Partially Degraded | 143.37 | 105.43 | 107.66 | 11.62% | 22.00
Highly Degraded    | 148.72 | 107.69 | 91.00  | 11.17% | 20.60

Table 34. SPY-1B Radar Tests.

 

3. Analysis of Results and Recommendations

a. Radar Operations Skill Results

There was a moderate decrease in the watchstander error percentage between the basic and experienced levels, but an unexpected rise at the expert level. The CIC-team classification error percentage did not produce a significant trend, and at the expert level the error percentage rose moderately. Also, the average number of attempted classifications increased between the basic and experienced levels and dropped at the expert level. There were no readily apparent reasons for these trends, so additional tests are recommended to determine whether trends develop over a larger number of runs. For the CIC test categories that did not produce trends, it is possible that the individual watchstander's skill level had a minimal influence on overall CIC performance and was overshadowed by one or more other variables in the test scenario (i.e., other watchstanders).

b. Experience Level Results

With the exception of a minor decrease in the CIC initial classification time, there were no indications of trends in the rest of the data. It is possible that the individual watchstander's experience level had a minimal influence on overall CIC performance and was overshadowed by one or more other variables in the test scenario (i.e., other watchstanders).

c. Fatigue Level Results

There was a minor increase in watchstander errors as fatigue levels increased, which corresponded to an increase in the CIC classification error rate (although the increase between fully rested and tired was insignificant). For the CIC test categories that did not produce trends, it is possible that the individual watchstander's fatigue level had a minimal influence on overall CIC performance and was overshadowed by one or more other variables in the test scenario (i.e., other watchstanders).

d. SPY-1B Radar Results

There was a very minor increase in the watchstander average task time as the radar became more degraded, and a moderate increase in the CIC average initial detection time and initial classification time (although the increase between fully operational and partially degraded was very small). For the CIC test categories that did not produce trends, it is possible that the radar status had a minimal influence on overall CIC performance and was overshadowed by one or more other variables in the test scenario (i.e., other watchstanders).

C. ELECTRONIC WARFARE CONTROL OFFICER (EWCO) AGENT TESTING AND ANALYSIS RESULTS

1. Expected Results Based on Air-Defense Expert Interviews

It was expected that as the skill attribute was increased from basic to expert, the watchstander task error percentage and the CIC-team classification error percentage would decrease. Increasing the experience attribute from newly qualified to expert was expected to decrease the EWCO agent's task and communication times as well as the CIC-team times, especially the average initial radar detection time. Increasing the fatigue attribute from fully rested to exhausted was expected to increase the agent's task error percentage and the CIC-team classification error percentage. Lastly, degrading the SLQ-32 system readiness attribute from fully operational to highly degraded was expected to increase both the watchstander times (task and message transmission) and the CIC-team times.

2. Results from the Simulation (See Appendix C Section A for Graphs)

 

ES Analysis Skill | W/s Avg. Task Time | W/s Avg. Msg. Transmit Time | W/s Task Error Percentage
Basic       | 3.79 | 1.14 | 22.20%
Experienced | 3.78 | 1.14 | 15.60%
Expert      | 3.61 | 1.16 | 19.60%

ES Analysis Skill | CIC Avg. Initial Radar Detect Time | CIC Avg. Initial Classif. Time | CIC Avg. Correct Classif. Time | Classification Error Percentage | Avg. # of Attempted CIC Classifications
Basic       | 139.06 | 128.94 | 117.71 | 8.14% | 25.80
Experienced | 140.99 | 129.59 | 95.03  | 7.22% | 29.10
Expert      | 141.62 | 126.75 | 131.11 | 5.82% | 29.20

Table 35. Electronic Signal (ES) Analysis Skill Tests.

Experience Level | W/s Avg. Task Time | W/s Avg. Msg. Transmit Time | W/s Task Error Percentage
Newly Qualified | 3.27 | 1.22 | 23.40%
Experienced     | 3.69 | 1.34 | 19.20%
Expert          | 3.72 | 1.24 | 16.60%

Experience Level | CIC Avg. Initial Radar Detect Time | CIC Avg. Initial Classif. Time | CIC Avg. Correct Classif. Time | Classification Error Percentage | Avg. # of Attempted CIC Classifications
Newly Qualified | 142.29 | 119.29 | 132.76 | 15.56% | 25.70
Experienced     | 141.96 | 126.22 | 146.77 | 9.51%  | 26.30
Expert          | 132.42 | 130.93 | 112.50 | 9.65%  | 25.90

Table 36. Experience Level Tests.

 

Fatigue Level | W/s Avg. Task Time | W/s Avg. Msg. Transmit Time | W/s Task Error Percentage
Fully Rested | 3.78 | 1.18 | 15.00%
Tired        | 3.64 | 1.13 | 22.80%
Exhausted    | 3.59 | 1.14 | 18.00%

Fatigue Level | CIC Avg. Initial Radar Detect Time | CIC Avg. Initial Classif. Time | CIC Avg. Correct Classif. Time | Classification Error Percentage | Avg. # of Attempted CIC Classifications
Fully Rested | 142.01 | 125.58 | 135.13 | 9.12%  | 28.50
Tired        | 140.10 | 121.74 | 120.93 | 8.96%  | 26.80
Exhausted    | 154.57 | 114.31 | 122.08 | 11.20% | 25.90

Table 37. Fatigue Level Tests.

SLQ-32 System Operational Readiness | W/s Avg. Task Time | W/s Avg. Msg. Transmit Time | W/s Task Error Percentage
Fully Operational  | 3.76 | 1.11 | 20.60%
Partially Degraded | 3.78 | 1.08 | 18.40%
Highly Degraded    | 3.71 | 1.19 | 21.40%

SLQ-32 System Operational Readiness | CIC Avg. Initial Radar Detect Time | CIC Avg. Initial Classif. Time | CIC Avg. Correct Classif. Time | Classification Error Percentage | Avg. # of Attempted CIC Classifications
Fully Operational  | 162.37 | 127.67 | 124.52 | 10.73% | 26.10
Partially Degraded | 188.06 | 131.43 | 145.40 | 13.65% | 24.90
Highly Degraded    | 166.80 | 153.45 | 150.10 | 12.45% | 26.50

Table 38. SLQ-32 System Radar Tests.

3. Analysis of Results and Recommendations

a. ES Analysis Skill Results

A decreasing trend was observed in the watchstander error percentage between the basic and experienced levels, but an unexpected rise occurred between the experienced and expert levels, although the error percentage at the expert level was still lower than at the basic level. There was no explanation for this occurrence, so additional tests are recommended to determine whether a steadily decreasing trend develops over a larger number of runs. The CIC-team classification error percentage showed a decreasing trend from basic to expert. The average number of attempted classifications also increased steadily, although the rise between the experienced and expert levels may not be statistically significant; additional tests would help determine whether a more significant trend develops. For the CIC test categories that did not produce trends, it is possible that the individual watchstander's ES analysis skill level had a minimal influence on overall CIC performance and was overshadowed by one or more other variables in the test scenario (i.e., other watchstanders). Empirical tests are needed to confirm the validity of the results.

b. Experience Level Results

Contrary to expectations, an increasing trend was observed in the watchstander average task time as the experience level was modified from newly qualified to expert. There was a notable decrease in the watchstander task error percentage and the CIC classification error percentage as the experience level increased, but no other trends were observable in the remaining test times.

c. Fatigue Level Results

Contrary to expectations, there was a minor decrease in the watchstander average task time as the fatigue level increased. The fully rested watchstander agent test may have drawn from a set of task times that, due to the inherent (and intended) variability in the watchstander design, happened to contain longer times.

d. SLQ-32 System Results

There was a notable increase in the CIC average initial classification and average correct classification times as the system readiness transitioned from fully operational to highly degraded.

D. FORCE TACTICAL ACTION OFFICER (F-TAO) AGENT TESTING AND ANALYSIS RESULTS

1. Expected Results Based on Air-Defense Expert Interviews

It was expected that as the skill attribute was increased from basic to expert, the watchstander task error percentage and the CIC-team classification error percentage would decrease. Increasing the experience attribute from newly qualified to expert was expected to decrease the Force TAO agent's task and communication times as well as the CIC-team times, especially the average initial radar detection time. Increasing the fatigue attribute from fully rested to exhausted was expected to increase the agent's task error percentage and the CIC-team classification error percentage. Lastly, shifting the decision-maker type attribute from cautious to aggressive was expected to decrease the watchstander and CIC times while increasing the CIC classification error percentage.

2. Results from the Simulation (See Appendix C Section A for Graphs)

 

Situation Awareness Skill | W/s Avg. Task Time | W/s Avg. Msg. Transmit Time | CIC Avg. Initial Radar Detect Time | CIC Avg. Initial Classif. Time | CIC Avg. Correct Classif. Time | Classification Error Percentage
Basic       | 15.34 | 1.34 | 149.37 | 275.03 | 314.26 | 14.59%
Experienced | 14.99 | 1.23 | 151.02 | 248.60 | 282.22 | 9.25%
Expert      | 15.06 | 1.20 | 180.37 | 272.60 | 317.61 | 7.75%

Table 39. Situation Awareness Skill Tests.

 

Experience Level | W/s Avg. Task Time | W/s Avg. Msg. Transmit Time | CIC Avg. Initial Radar Detect Time | CIC Avg. Initial Classif. Time | CIC Avg. Correct Classif. Time | Classification Error Percentage
Newly Qualified | 14.84 | 1.20 | 172.20 | 246.73 | 266.59 | 13.25%
Experienced     | 14.70 | 1.18 | 169.34 | 257.29 | 238.71 | 11.50%
Expert          | 14.60 | 1.19 | 153.17 | 239.12 | 213.78 | 10.50%

Table 40. Experience Level Tests.

Fatigue Level | W/s Avg. Task Time | W/s Avg. Msg. Transmit Time | CIC Avg. Initial Radar Detect Time | CIC Avg. Initial Classif. Time | CIC Avg. Correct Classif. Time | Classification Error Percentage
Fully Rested | 14.63 | 1.24 | 175.99 | 250.99 | 269.55 | 7.00%
Tired        | 15.26 | 1.30 | 165.03 | 257.06 | 275.95 | 12.00%
Exhausted    | 14.81 | 1.23 | 158.29 | 257.46 | 233.34 | 14.00%

Table 41. Fatigue Level Tests.

 

Decision-maker Type | W/s Avg. Task Time | W/s Avg. Msg. Transmit Time | CIC Avg. Initial Radar Detect Time | CIC Avg. Initial Classif. Time | CIC Avg. Correct Classif. Time | Classification Error Percentage
Cautious   | 15.31 | 1.11 | 160.55 | 288.95 | 306.77 | 14.00%
Balanced   | 14.79 | 1.31 | 171.32 | 245.23 | 272.84 | 12.25%
Aggressive | 14.32 | 1.13 | 171.87 | 228.05 | 300.90 | 12.00%

Table 42. Decision-maker Type Tests.

 

3. Analysis of Results and Recommendations

a. Situation Awareness Skill Results

As expected, there was a noticeable decline in the CIC aircraft classification error percentage as the skill level increased from basic to expert. However, an increase in the CIC average initial radar detection time was also observed; there was no explanation for this occurrence, so additional tests are recommended to determine whether a steady trend develops over a larger number of runs in this area. For the categories that did not produce trends, it is possible that the individual watchstander's skill level had a minimal influence on overall CIC performance and was overshadowed by one or more other variables in the test scenario (i.e., other watchstanders).

b. Experience Level Results

As the experience level increased from newly qualified to expert, there was a corresponding (and in most cases expected) decrease in the watchstander average task times, the watchstander average message transmission times (minimal), the CIC average initial radar detection times, the CIC average correct classification times, and the classification error percentages. These results suggest that, for the Force TAO, the watchstander's experience level has a significant effect on both individual and collective CIC team performance.

c. Fatigue Level Results

An increase in the classification error percentage was noted as the fatigue level transitioned from fully rested to exhausted. There was also a corresponding decrease in the CIC average initial radar detection time, although a reason for this decline could not be ascertained.

d. Decision-Maker Type Results

As expected, the watchstander average task times and the CIC average initial classification time decreased as the decision-maker type transitioned from cautious to aggressive. A corresponding decline in the classification error percentage was also noted, which was not expected. There was no explanation for this occurrence, so additional tests are recommended to determine whether a steadily decreasing trend develops over a larger number of runs and to confirm the validity of the results.

E. COMBAT INFORMATION CENTER (CIC) WATCH TEAM ATTRIBUTE PROFILE TESTING AND ANALYSIS

1. Expected Results Based on Air-Defense Expert Interviews

a. Trial Profile Summary

Trial #1 is a scenario in which the Force TAO's skill and experience attributes are set to expert while the fatigue attribute is set to exhausted. For the rest of the CIC team, the skill and experience attributes are set to basic and newly qualified respectively, while the fatigue attribute is set to fully rested. In Trial #2, the Force TAO's skill and experience attributes are set to basic and newly qualified respectively, while the fatigue attribute is set to fully rested; for the rest of the CIC team, the skill and experience attributes are set to expert while the fatigue attribute is set to exhausted. The objective of the trials was to gain insight into which Force TAO/CIC team combination would perform better given these settings.

b. Expectations

It was expected that the Force TAO/CIC team in Trial #2 would outperform the team in Trial #1 (with respect to the CIC classification error percentage) because the sensory-input watchstanders (RSC, TIC, IDS, and EWCO) in Trial #2 would make fewer contact assessment mistakes, helping the basic/newly qualified Force TAO overcome his minimal qualifications. For the individual watchstander results (relating to the Force TAO), it was expected that the Force TAO agent in the first trial (expert attributes) would outperform the Force TAO agent in the second trial.

2. Results from the Simulation (See Appendix C Section A for Graphs)

 

CIC Watch Team Tests | W/s Avg. Task Time (F-TAO) | W/s Avg. Msg. Transmit Time | W/s Task Error Percentage
Trial #1 | 15.46 | 1.23 | 8.30
Trial #2 | 16.57 | 2.03 | 9.50

CIC Watch Team Tests | CIC Avg. Initial Radar Detect Time | CIC Avg. Initial Classif. Time | CIC Avg. Correct Classif. Time | Classification Error Percentage | Avg. # of Attempted CIC Classifications
Trial #1 | 175.92 | 271.87 | 274.21 | 14.59% | 56.90
Trial #2 | 124.86 | 339.42 | 349.54 | 19.79% | 48.00

Table 43. CIC Watch Team Attribute Profile Tests.

 

3. Analysis of Results and Recommendations

As expected, the Force TAO agent in the first trial performed better than the Force TAO in the second trial. The only area in which the Force TAO and CIC team in the second trial outperformed the other team was the CIC average initial radar detection time. Within the simulation, this suggests that an expert but exhausted Force TAO working with a fully rested, basic/newly qualified CIC team may perform better than a fully rested, basic/newly qualified Force TAO working with an expert but exhausted CIC team.

F. COMBAT INFORMATION CENTER (CIC) WATCH TEAM TESTING AND ANALYSIS OF WEATHER OPTIONS

1. Expected Results Based on Air-Defense Expert Interviews

It was expected that as the weather attribute worsened from clear to heavy clutter, the CIC-team times would increase due to degradation of the combat systems sensory equipment (SPY-1B radar, IFF, SLQ-32, Link 11/16) in detecting aircraft contacts.

 

 

 

2. Results from the Simulation (See Appendix C Section A for Graphs)

 

Weather Option Tests | W/s Avg. Task Time (F-TAO) | W/s Avg. Msg. Transmit Time
Clear Weather | 15.03 | 1.14
Heavy Rain    | 14.55 | 1.16
Heavy Clutter | 14.42 | 1.13

Weather Option Tests | CIC Avg. Initial Radar Detect Time | CIC Avg. Initial Classif. Time | CIC Avg. Correct Classif. Time | Classification Error Percentage | Avg. # of Attempted CIC Classifications
Clear Weather | 194.54 | 257.30 | 294.30 | 15.97% | 38.20
Heavy Rain    | 183.56 | 270.29 | 310.11 | 12.53% | 43.10
Heavy Clutter | 172.13 | 260.74 | 242.47 | 11.60% | 43.10

Table 44. Weather Option Tests.

 

3. Analysis of Results and Recommendations

Contrary to expectations, decreasing trends were noted in the watchstander average task times (minimal), the CIC average initial radar detection times, and the CIC classification error percentages, which suggests that in the simulation worsening weather had the opposite of the expected effect on CIC team performance.

G. RESULTS OF THE SURVEY OF THE ATRC DETACHMENT, SAN DIEGO AIR-DEFENSE EXPERTS

1. Survey Overview

An ADC Simulation realism survey was administered to the nine air-defense experts at the ATRC Detachment in San Diego. Survey questions (organized in sections) were developed from the results of the testing and analysis of the ADC Simulation to explore how realistic the program's performance and output appeared in light of the experts' professional air-defense experience. This method of comparison was selected over direct observation and assessment of ADC Simulation scenarios to minimize skewed responses arising from a respondent's desire to evaluate the program differently because the survey administrator was also the simulation developer. In addition, we wished to compare the air-defense experts' evaluation of the simulation test results with as few other factors as possible, and some of the results used to formulate the scenario questions were produced using an initial set of simulation tests that would later be repeated. In these cases, the air-defense expert responses can be viewed as the expected characteristic or behavior for that particular attribute, which can then be examined in the model and modified if necessary.

The survey asked each respondent to rate his or her level of agreement or disagreement with the simulation results, based on professional experience, on a scale from (1) Strongly Disagree to (7) Strongly Agree. Each question was followed by an optional question asking the respondent, if his or her experience was contrary to the results, whether the outcome expressed in the first question could nevertheless occur under a certain set of conditions. For the final set of questions in the survey (Section Five), the second question instead asked whether the result from the first question was something that should concern those training CIC teams to perform air-defense duties.
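
For reference, the Average and Std. Dev. columns reported in the tables that follow can be reproduced from the raw 1-7 ratings roughly as shown below. This is a generic sketch; the thesis does not state whether the sample (n-1) or population (n) form of the standard deviation was used, so the sample form shown here is an assumption.

    /** Generic mean and sample standard deviation for 1-7 survey ratings. */
    public class SurveyStatsSketch {
        public static double mean(int[] ratings) {
            double sum = 0;
            for (int r : ratings) sum += r;
            return sum / ratings.length;
        }

        public static double sampleStdDev(int[] ratings) {
            double m = mean(ratings);
            double squaredDiffs = 0;
            for (int r : ratings) squaredDiffs += (r - m) * (r - m);
            return Math.sqrt(squaredDiffs / (ratings.length - 1));
        }
    }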

2. RSC Watchstander Questions and Results

a. Questions Posed

(1) A simulation scenario was run using the default CIC watch team attribute settings, and only the RSC radar operations skill attribute (Basic, Experienced, Expert) was changed. The results showed that the RSC's radar task performance improved and the number of errors committed decreased as the skill level increased (Basic to Expert).

(2) A simulation scenario was run using the default CIC watch team attribute settings, and only the RSC experience level attribute (Newly Qualified, Experienced, Expert) was changed. The results showed that the RSC's radar task performance improved and the number of errors committed decreased as the experience level increased (Newly Qualified to Expert).

(3) A simulation scenario was run using the default CIC watch team attribute settings, and only the RSC fatigue level attribute (Fully Rested, Tired, Exhausted) was changed. The results showed that the RSC's radar task performance times worsened and the number of errors committed increased as the fatigue level increased (Fully Rested to Exhausted).

(4) A simulation scenario was run using the default CIC watch team attribute settings, and only the SPY-1B radar system attribute (Fully Operational, Partially Degraded, Highly Degraded) was changed. The results showed that the RSC's radar task performance worsened as the equipment readiness degraded (Fully Operational to Highly Degraded).

(5) The scenario results showed that the CIC team performance (average classification times and number of errors committed) improved (times and error counts decreased) as the RSC's skill and experience levels increased, and worsened as the fatigue level increased.

b. Results (See Appendix C Section B for Graphs)

RSC Questions | Average | Mean | Std. Dev.
1. Skill Modification           | 6.11 | 6   | 0.93
   Contrary Responses           | N/A  | N/A | N/A
2. Experience Modification      | 6    | 6   | 1.00
   Contrary Responses           | N/A  | N/A | N/A
3. Fatigue Modification         | 5.33 | 5   | 1.32
   Contrary Responses           | 3    | 3   | N/A
4. SPY-1B Radar Modification    | 5    | 5   | 0.71
   Contrary Responses           | 4    | 4   | N/A
5. CIC Team Performance wrt RSC | 5    | 5   | 1.00
   Contrary Responses           | 4    | 4   | N/A

Table 45. Results of RSC Questions.

 

c. Analysis and Recommendations

For the skill and experience questions, the average responses fell in the agree to strongly-agree range (6.11 and 6 respectively) with no contrary responses. For the fatigue question, most experts agreed with the results of the test runs (5.33 average), although one person could not think of a set of circumstances that would produce such results (3.0 contrary response). For the SPY-1B radar question, most experts agreed with the results of the test runs (5.0 average); one person disagreed with the results but allowed that a set of circumstances could produce them, while not strongly agreeing that the situation could reasonably occur (4.0 contrary response). Lastly, most experts agreed with the CIC team performance in the test runs (5.0 average), although one person disagreed (4.0 contrary response).

 

3. EWCO Watchstander Questions and Results

a. Questions Posed

(1) A simulation scenario was run using the default CIC watch team attribute settings, and only the EWCO ES analysis skill attribute (Basic, Experienced, Expert) was changed. The results showed that the EWCO's task performance improved and the number of errors committed decreased as the skill level increased (Basic to Expert).

(2) A simulation scenario was run using the default CIC watch team attribute settings, and only the EWCO experience level attribute (Newly Qualified, Experienced, Expert) was changed. The results showed that the EWCO's task performance improved and the number of errors committed decreased as the experience level increased (Newly Qualified to Expert).

(3) A simulation scenario was run using the default CIC watch team attribute settings, and only the EWCO fatigue level attribute (Fully Rested, Tired, Exhausted) was changed. The results showed that the EWCO's task performance times worsened and the number of errors increased as the fatigue level increased (Fully Rested to Exhausted).

(4) A simulation scenario was run using the default CIC watch team attribute settings, and only the SLQ-32 system attribute (Fully Operational, Partially Degraded, Highly Degraded) was changed. The results showed that the EWCO's task performance worsened as the equipment readiness degraded (Fully Operational to Highly Degraded).

(5) The results showed that the CIC team performance (average classification times and number of errors committed) improved (times and error counts decreased) as the EWCO's skill and experience levels increased, and worsened as the fatigue level increased.

 

 

 

 

b. Results (See Appendix C Section B for Graphs)

EWCO Questions | Average | Mean | Std. Dev.
1. Skill Modification            | 6.22 | 6 | 0.83
   Contrary Responses            | 5    | 5 | N/A
2. Experience Modification       | 5.89 | 6 | 0.78
   Contrary Responses            | 4    | 4 | N/A
3. Fatigue Modification          | 5.44 | 6 | 1.01
   Contrary Responses            | 5    | 5 | N/A
4. SLQ-32 System Modification    | 5.44 | 5 | 1.13
   Contrary Responses            | 4    | 4 | N/A
5. CIC Team Performance wrt EWCO | 5.44 | 6 | 1.51
   Contrary Responses            | 4    | 4 | N/A

Table 46. Results of EWCO Questions.

 

c. Analysis and Recommendations

For the skill, experience, fatigue, SLQ-32 system, and CIC-team-performance questions, the average responses fell in the agree to strongly-agree range (6.22, 5.89, 5.44, 5.44, and 5.44 respectively), with only one contrary response for each question. In each of these contrary responses, however, the person agreed that such a situation could occur.

4. F-TAO Watchstander Questions and Results

a. Questions Posed

(1) A simulation scenario was run using the default CIC watch team attribute settings, and only the Force TAO situation awareness skill attribute (Basic, Experienced, Expert) was changed. The results showed that the Force TAO's task performance improved and the number of errors committed decreased as the skill level increased (Basic to Expert).

(2) A simulation scenario was run using the default CIC watch team attribute settings, and only the Force TAO's experience level attribute (Newly Qualified, Experienced, Expert) was changed. The results showed that the Force TAO's task performance improved and the number of errors committed decreased as the experience level increased (Newly Qualified to Expert).

(3) A simulation scenario was run using the default CIC watch team attribute settings, and only the Force TAO's fatigue level attribute (Fully Rested, Tired, Exhausted) was changed. The results showed that the Force TAO's task performance times worsened and the number of errors committed increased as the fatigue level increased (Fully Rested to Exhausted).

(4) A simulation scenario was run using the default CIC watch team attribute settings, and only the Force TAO's decision-maker type attribute (Cautious, Balanced, Aggressive) was changed. The results showed that the Force TAO's task performance times decreased and the number of errors committed increased as the decision-maker type became more aggressive (Cautious to Aggressive).

(5) The results showed that the CIC team performance (average classification times and number of errors committed) improved (times and error counts decreased) as the Force TAO's skill and experience levels increased, and worsened as the fatigue level increased.

(6) The scenario results showed that the CIC team's average classification times decreased and the number of errors increased as the Force TAO's decision-maker type was changed (Cautious to Aggressive).

b. Results (See Appendix C Section B for Graphs)

F-TAO Questions | Average | Mean | Std. Dev.
1. Skill Modification                                  | 6    | 6   | 1.12
   Contrary Responses                                  | 4    | 4   | N/A
2. Experience Modification                             | 5.78 | 6   | 0.83
   Contrary Responses                                  | 5    | 5   | N/A
3. Fatigue Modification                                | 5.56 | 5   | 0.73
   Contrary Responses                                  | 4    | 4   | N/A
4. Decision-maker Type Modification                    | 4.33 | 4   | 1.41
   Contrary Responses                                  | 4.5  | 4.5 | 0.58
5. CIC Team Performance wrt F-TAO Performance          | 5.22 | 5   | 1.30
   Contrary Responses                                  | 4    | 4   | N/A
6. CIC Team Performance wrt F-TAO Decision-maker Type  | 4.44 | 4   | 1.59
   Contrary Responses                                  | 4.25 | 4.5 | 0.96

Table 47. Results of F-TAO Questions.

 

c. Analysis and Recommendations

For the skill, experience, and fatigue questions, the average responses fell between the agree and strongly-agree categories (6, 5.78, and 5.56 respectively), with only one contrary response for each question; in each case the person agreed that such a situation could occur. The decision-maker type question drew a moderate level of disagreement: four of the seven experts disagreed with the outcome (4.33 average) and gave a middle-of-the-road reply to the contrary-response question (4.5 average). For the CIC team performance (with respect to F-TAO performance) question, the responses were generally in the agree category (5.22 average) with only one contrary response. The CIC team performance (with respect to F-TAO decision-maker type) question drew a level of disagreement similar to the decision-maker question, with a 4.44 average response and four experts providing contrary responses (4.25 average), indicating a mixed view of whether the situation could reasonably occur under a given set of circumstances.

5. CIC Team Questions and Results

a. Questions Posed

(1) Two simulation scenarios were run with two different CIC watch team settings. In the first scenario, the Force TAO was assigned EXPERT ratings in the skill and experience attributes but an EXHAUSTED rating in the fatigue attribute, while the rest of the CIC team was assigned BASIC/NEWLY QUALIFIED skill and experience ratings and FULLY RESTED fatigue ratings. In the second scenario these settings were reversed: the Force TAO was assigned BASIC/NEWLY QUALIFIED ratings in the skill and experience attributes and a FULLY RESTED rating in the fatigue attribute, while the rest of the CIC team was assigned EXPERT skill and experience ratings but EXHAUSTED fatigue ratings.

The results of the scenarios showed that the Force TAO/CIC team in the second scenario committed fewer contact classification errors and had lower initial contact radar detection times than the team in the first scenario.

(2) During scenario testing, it was noted that an expert-skilled/experienced CIC watch team with a high level of fatigue performed better (task performance times, number of errors committed) than a basic-skilled/experienced watch team that was fully rested (no fatigue).

b. Results (See Appendix C Section B for Graphs)

CIC Team Watchstander Questions | Average | Mean | Std. Dev.
1. F-TAO Expert/Exhausted - CIC Team Basic/Well-Rested | 5    | 5   | 1.50
   Contrary Responses                                  | N/A  | N/A | N/A
2. F-TAO Basic/Well-Rested - CIC Team Expert/Exhausted | 4.67 | 5   | 1.32
   Contrary Responses                                  | 4    | 4   | N/A

Table 48. Results of CIC Team Watchstander Questions.

 

c. Analysis and Recommendations

The results of question one indicate that most of the respondents agreed that the scenario outcome was reasonable (5.0 average), though two respondents disagreed (two ratings of three). Despite those low ratings, neither of the two answered the contrary-response question.

The results of question two indicate that most of the respondents agreed that the scenario outcome was reasonable (4.67 average), again with two respondents disagreeing (two ratings of three). There was one contrary response, in which the expert agreed that under a certain set of circumstances the situation could possibly occur.

6. Additional CIC Team Questions and Results

a. Questions Posed

(1) In some scenarios, the number of errors committed and the watchstander performance times were better for lower skill/experience/fatigue/equipment-readiness settings than for higher ones. Given a certain set of conditions, is this outcome possible?

(2) In some scenarios, the CIC watch team did not identify/classify a nearby, closing contact until it was within a potential weapons-release range (had it been hostile). Given a certain set of conditions, is this outcome possible?

(3) In some scenarios, the CIC watch team misidentified a contact, and the classification remained incorrect for a significant amount of (simulated) time before being corrected. In some cases the contact was never correctly identified. Given a certain set of conditions, is this outcome possible?

(4) In some scenarios, the CIC watch team misidentified a contact as hostile (imminent attack) and launched missiles at the aircraft. Given a certain set of conditions, is this outcome possible?

(5) It was noted that the higher the scenario threat level (White, Yellow, Red), the more likely the CIC watch team was to classify a contact as either suspect or hostile. Is this outcome typical of such a situation?

(6) It was noted that the performance of the RSC, EWCO, and IDS watchstanders influenced the classification of a contact (via the Force TAO) more than that of any other watchstander on the CIC team. Is this outcome typical of such a situation?

b. Results (See Appendix C Section B for Graphs)

Additional CIC Team Watchstander Questions | Average | Mean | Std. Dev.
1. Lower Skilled/Experienced W/s Performance vs. Higher Skilled/Experienced W/s Performance | 4.67 | 5   | 1.12
   Worthy of Concern wrt. Training Questions                                                | 3.5  | 3.5 | 0.71
2. Closing Contacts Not Classified by CIC Team until within Weapons-Release Range           | 5.44 | 5   | 1.01
   Worthy of Concern wrt. Training Questions                                                | 5.33 | 5   | 1.53
3. Contacts Misidentified and Not Corrected for a Significant Amount of Time, or Never      | 5.66 | 5   | 1.13
   Worthy of Concern wrt. Training Questions                                                | 5.67 | 6   | 0.58
4. Contacts Misidentified as Hostile and Missiles Launched against Them                     | 5.33 | 5   | 1.00
   Worthy of Concern wrt. Training Questions                                                | 6.33 | 7   | 1.15
5. Higher Scenario Threat Level Led to Greater Probability of Suspect/Hostile Classifications | 4.78 | 5 | 0.83
   Worthy of Concern wrt. Training Questions                                                | 4.75 | 5   | 1.26
6. Greater Influence of RSC, EWCO, and IDS W/s on Force TAO's Classifications of Contacts   | 5.25 | 5   | 0.89
   Worthy of Concern wrt. Training Questions                                                | 4.67 | 5   | 1.53

Table 49. Results of Additional CIC Team Watchstander Questions.

 

c. Analysis and Recommendations

(1) Question 1. The results indicate that most respondents agreed that the scenario outcome was reasonable (4.67 average). Two experts disagreed that it was a topic worthy of concern for training (3.5 training-concern response).

(2) Question 2. The results indicate that most respondents agreed that the scenario outcome was reasonable (5.44 average). Three experts disagreed with the outcome but moderately agreed that it was a topic worthy of concern (5.33 training-concern response).

(3) Question 3. The results indicate that most respondents agreed that the scenario outcome was reasonable (5.66 average). Three experts disagreed with the outcome but moderately agreed that it was a topic worthy of concern (5.67 training-concern response).

(4) Question 4. The results indicate that most of the respondents agreed that the scenario outcome was reasonable (5.33 average). Three experts disagreed with the outcome but strongly agreed that it was a topic worthy of concern (6.33 training-concern response).

(5) Question 5. The results indicate a mixture of agreement and disagreement with the scenario findings. Four experts disagreed with the results, although they agreed that it was a topic worthy of concern (4.75 training-concern response); three of the experts agreed with the results, producing a 4.78 overall average.

(6) Question 6. The results indicate that most of the experts agreed with the simulation test (5.25 average), but three people disagreed with the results. In the training-concern question, however, these people moderately agreed that it was a topic worthy of concern (4.67 training-concern average).

VI. FUTURE WORK AND DEVELOPMENT OF THE CRUISER ADC SIMULATION

A. FUTURE WORK INTRODUCTION

From its inception, the ADC Simulation was conceived as a long-term project that would begin with the development and deployment of the initial program as a proof of concept and continue to expand in scope and detail. Ideally, a research collaboration among the Naval Postgraduate School Modeling, Virtual Environments, and Simulation (MOVES) Institute and Computer Science Department, Space and Naval Warfare Systems Center (SPAWAR) San Diego, the Office of Naval Research (ONR), the AEGIS Training & Readiness Center (ATRC) at Dahlgren, Virginia, and the Naval Sea Systems Command (NAVSEA) at Dahlgren was envisioned as the most beneficial arrangement. The MOVES Institute, in conjunction with the Computer Science Department, would establish a Battle Group Air-Defense Research Project Laboratory, similar to the MOVES Institute's Army Game Project, to oversee and coordinate the simulation's implementation. Master's and doctoral students at NPS, under the direct guidance and supervision of faculty, would conduct the primary research and development of the ADC Simulation. The students' military experience, coupled with their academic research requirements, would provide a unique opportunity to implement advanced modules. Furthermore, this work could be accomplished at a substantially reduced cost compared to the same work performed by a civilian contractor, resulting in a favorable benefit-to-cost ratio. If this path of research and development were pursued, SPAWAR, ONR, ATRC Dahlgren, and NAVSEA would provide project and system training objectives, additional guidance, technical data, simulation evaluation responsibilities, and funding to support the development and implementation process.

Expansion of the ADC Simulation could occur in three categories, resulting in a possible line of products to support air-defense training and planning. The first category of future work involves the continued expansion of the ADC Simulation in scope and detail: the scope factor relates to widening the simulation's focus so that a broader range of relevant operations can be simulated, while the detail factor consists of improvements to the fidelity with which the simulation replicates various aspects of battle group air defense. The second category pursues the extension of the ADC Simulation as the centerpiece of an interactive air-defense training system for use by watchstanders in gaining and maintaining watchstation proficiency. The last category involves adapting the ADC Simulation to replicate similar types of operations conducted by the United States Army and the United States Air Force.

B. FUTURE WORK TO EXPAND THE SCOPE AND DETAIL OF THE ADC SIMULATION

1. Implement Networked Simulation of Battle Group Air-Defense Operations

The next logical step in the extension of the ADC project is to develop a networked simulation of naval ships performing battle group air defense in order to reproduce the performance and events of such operations more accurately. In the original ADC Simulation, ships beyond the ADC cruiser and the aircraft carrier were also simulated, but they were abstracted to provide only the necessary behavior (i.e., transmission of Link 11/16 contacts, surface-to-air missile engagements, and communications reports to the cruiser). To gain a greater level of fidelity, these other ships must be implemented in the same manner as the ADC cruiser, which would require that they be developed using a multi-agent system architecture. However, implementing these additional agent-based CIC teams/ships on the existing program platform would be problematic for two reasons. First, the current platform's resources are already near maximum usage (due to the speed and memory limitations of the computer), and any additional large-scale additions would likely strain the system to the point that its performance would suffer. Second, and perhaps more importantly, the performance of these additional ships and teams would not be accurately represented because their actions, which normally occur in parallel in the real world, would be executed serially, despite the use of Java's multi-threading facilities, because of inherent PC hardware limitations.

These issues can be resolved by implementing each of the CIC teams/ships on its own processor and networking them together to simulate battle group air-defense operations more accurately. Using the Extensible Markup Language (XML) and other networking components, the ships could be connected to realistically simulate the voice and Link 11/16 messages that represent the significant majority of all communications occurring within a battle group. A diagram of the proposed configuration is shown in Figure 39.
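
As a concrete illustration of this idea, the sketch below shows a ship node serializing a simple XML track report and pushing it to another node over a TCP socket. The XML element names, the host name, and the port are assumptions chosen for illustration; they do not represent an implemented protocol or actual Link 11/16 message formats.

    import java.io.PrintWriter;
    import java.net.Socket;

    /** Illustrative sender of an XML track report between networked ship/CIC nodes. */
    public class TrackReportSenderSketch {
        public static void main(String[] args) throws Exception {
            String trackReport =
                "<trackReport>"
                + "<trackNumber>7012</trackNumber>"
                + "<classification>UNKNOWN</classification>"
                + "<position lat=\"27.15\" lon=\"51.30\" altitudeFt=\"22000\"/>"
                + "<kinematics courseDeg=\"185\" speedKts=\"420\"/>"
                + "</trackReport>";

            // Each CIC team/ship would run on its own processor; this node simply
            // pushes the report to a notional receiving node on the network.
            try (Socket socket = new Socket("adc-cruiser-node", 5150);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                out.println(trackReport);
            }
        }
    }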

 

Figure 39. Battle Group Simulation of Air-Defense Operations.

 

There are several advantages to expanding the ADC Simulation in this manner to replicate battle group operations. First, hosting the CIC teams/ships on independent platforms would eliminate the serial-execution problem associated with implementing them on a single platform and would support the high degree of parallelism inherent in actual operations. Second, if designed to capture a level of performance similar to actual operations, this configuration could provide exceptional insight into battle group air-defense performance under a wide variety of situations. If all of the platforms were implemented using the same architecture as the ADC Simulation, a considerable level of accuracy and fidelity could be achieved, since a multitude of watchstander, ship system, and environmental attributes could be specified for each CIC team and ship, especially in the areas of Link 11/16 performance, air-defense decision-making, and aircraft classification. Lastly, a battle group ADC Simulation could provide battle group and fleet command staffs, battle group training organizations, and doctrine/war-gaming organizations with an accurate and realistic simulation architecture to support their mission objectives.

2.�������� Implement a More Detailed Watchstander Fatigue/Vigilance Model

Another potential and useful development project for the ADC Simulation would be to implement a more detailed and realistic fatigue performance model for the watchstanders.In the current system, the user presets the fatigue level before and/or during the simulation run, but this setup does not accurately represent the decline of watchstander performance over extended periods of time or any of the other factors that influence them.Major Joerg Wellbrink, Federal German Army, and Dr. Rudolph Darken at the MOVES INSTITUTE of the Naval Postgraduate School, are exploring the research and development of fatigue and vigilance models for inclusion in watchstander performance.They assert,

It is possible to realistically simulate individual human performance to generate surprises, unintended consequences and potentially dangerous outcomes.A new cognitive model, based on complex adaptive systems (CAS) theory, enabled us to explore imperfect human behavior, thereby meeting a necessary precondition to a new kind of threat-analysis simulation models.[57]

The integration of this model into the ADC Simulation's watchstander agents could raise the realism of the agents' behaviors much closer to expected human performance, thereby increasing the overall fidelity and usefulness of the program.
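As a simple illustration of what such an integration might look like, the sketch below scales a watchstander's base probability of task success by an exponential vigilance decay over time on watch. The class, method names, and decay constant are assumptions for illustration and do not correspond to the Wellbrink/Darken model or to the current ADC Simulation code.

    /** Sketch of a time-varying fatigue effect that scales a watchstander's
     *  base probability of task success; the decay constant and method names
     *  are illustrative assumptions, not the ADC Simulation's actual model. */
    public class FatigueModel {

        private final double baseProbabilityOfSuccess;  // e.g., derived from skill and experience
        private final double hourlyDecay;               // notional vigilance loss per hour on watch

        public FatigueModel(double baseProbabilityOfSuccess, double hourlyDecay) {
            this.baseProbabilityOfSuccess = baseProbabilityOfSuccess;
            this.hourlyDecay = hourlyDecay;
        }

        /** Effective probability of success after the given number of hours on watch. */
        public double effectiveProbability(double hoursOnWatch) {
            double degraded = baseProbabilityOfSuccess * Math.exp(-hourlyDecay * hoursOnWatch);
            return Math.max(degraded, 0.05);  // performance degrades but never reaches zero
        }

        public static void main(String[] args) {
            FatigueModel rsc = new FatigueModel(0.90, 0.08);
            for (int hour = 0; hour <= 8; hour += 2) {
                System.out.printf("Hour %d: P(success) = %.2f%n", hour, rsc.effectiveProbability(hour));
            }
        }
    }

A richer model would replace the exponential decay with the CAS-based behavior described by Wellbrink and Darken, but the integration point in the watchstander agents would be essentially the same.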

3.    Implement Aircraft Contacts as Agents

The aircraft contact objects used in the simulation were designed with a substantial level of detail in their performance and actions, producing a robust and dynamic set of behaviors. This rich variety of potential actions ensures the watchstander agents are not presented with static aircraft profiles, which would otherwise reduce the quality and uniqueness of the ADC Simulation. The next step in the development of the aircraft contacts would be to implement them as agents. Transforming the contacts into agents would give these software components objectives and intents that drive them toward behaviors more closely resembling those of human pilots. This capability would allow deception to be introduced into the behavior set of the aircraft agents and could produce very interesting results when deployed against the CIC team.
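One possible shape for such an agent, sketched below, is a contact whose maneuver selection at each simulation step is driven by an internal intent, including a deceptive profile that mimics commercial traffic until it closes on the cruiser. The class, intents, and thresholds are illustrative assumptions rather than a design taken from the ADC Simulation.

    import java.util.List;

    /** Sketch of an aircraft contact recast as a goal-driven agent whose
     *  tick() method chooses a maneuver, including deception, based on its
     *  intent; the enum values and decision logic are illustrative only. */
    public class AircraftAgent {

        enum Intent { TRANSIT, PROBE_DEFENSES, ATTACK }

        private final Intent intent;
        private double altitudeFt;
        private double speedKts;

        public AircraftAgent(Intent intent, double altitudeFt, double speedKts) {
            this.intent = intent;
            this.altitudeFt = altitudeFt;
            this.speedKts = speedKts;
        }

        /** One simulation step: pick a maneuver consistent with the agent's intent. */
        public String tick(double rangeToCruiserNm) {
            switch (intent) {
                case ATTACK:
                    if (rangeToCruiserNm > 40) {
                        // Deception: fly a commercial-air profile until inside 40 NM.
                        altitudeFt = 30000; speedKts = 420;
                        return "mimicking commercial profile";
                    }
                    altitudeFt = 500; speedKts = 550;
                    return "descending for attack run";
                case PROBE_DEFENSES:
                    return "turning inbound, then back outbound before query range";
                default:
                    return "maintaining transit profile";
            }
        }

        public static void main(String[] args) {
            AircraftAgent bogey = new AircraftAgent(Intent.ATTACK, 30000, 420);
            for (double range : List.of(80.0, 55.0, 38.0)) {
                System.out.println(range + " NM: " + bogey.tick(range));
            }
        }
    }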

4.    Implement a More Detailed Log Parser Using XML

Although the current program contains a simulation log parser to find and display specific data, the implementation is basic and could be improved to increase its usefulness and relevance. In the most recent version, the ADC Simulation log parser allows the user to search watchstander, scenario, and equipment log data by entering an aircraft contact's track number. By using XML to tag the data within the log entries, the simulation's recording and display components could be redesigned to allow the logs to be parsed at a finer level of granularity. This would permit more flexible retrieval of the recorded data, including parsing of specific logs (i.e., the RSC Decision History Log only), parsing between designated periods of time, comparison of different watchstanders' log records on the same aircraft contact, and any combination of the above.
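The fragment below sketches what such a query could look like using the standard Java DOM parser: XML-tagged log entries are filtered by log type, track number, and scenario-time window. The file name, element names, and attribute names describe a hypothetical log schema, not the format currently written by the simulation.

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    /** Sketch of a finer-grained log parser: filters XML-tagged log entries
     *  by log type, track number, and scenario-time window.  The element and
     *  attribute names are assumptions about a possible log schema. */
    public class LogParser {

        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new File("scenarioLog.xml"));   // hypothetical log file

            NodeList entries = doc.getElementsByTagName("logEntry");
            for (int i = 0; i < entries.getLength(); i++) {
                Element entry = (Element) entries.item(i);
                String log = entry.getAttribute("log");                           // e.g., "RSC Decision History"
                int track  = Integer.parseInt(entry.getAttribute("trackNumber"));
                int time   = Integer.parseInt(entry.getAttribute("scenarioTime")); // seconds

                // Example query: RSC decisions on track 7021 during the first ten minutes.
                if (log.equals("RSC Decision History") && track == 7021 && time <= 600) {
                    System.out.println(time + "s: " + entry.getTextContent().trim());
                }
            }
        }
    }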

5.    Implement a More Detailed Capability for AEGIS and Air-Defense Doctrine

In the current version of the ADC Simulation, the only user-modifiable doctrine is the Auto-special doctrine under the AEGIS Doctrine menu. This option was included as a proof-of-concept to demonstrate the usefulness of letting the user modify the various types of doctrine available to a real CIC team. Other relevant doctrines that could be implemented are Identification Doctrine, IFF Doctrine, and Auto-Standard Missile Doctrine. These doctrines, frequently used by CIC teams to alleviate some of the time constraints inherent in air-defense operations, would greatly enhance the realism of the simulation.
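As a notional example of how one of these doctrines might be expressed in the program, the sketch below encodes a user-modifiable identification doctrine rule that tentatively classifies an unknown contact when its kinematic profile matches and no valid IFF response has been received. The thresholds, field names, and returned label are assumptions for illustration only.

    /** Sketch of one way a user-modifiable identification doctrine rule could
     *  be represented: if an unknown contact matches the profile, the doctrine
     *  assigns a tentative classification automatically.  Thresholds and the
     *  returned label are illustrative assumptions. */
    public class IdentificationDoctrine {

        private final double minSpeedKts;
        private final double maxAltitudeFt;
        private final boolean requiresNoIffResponse;

        public IdentificationDoctrine(double minSpeedKts, double maxAltitudeFt,
                                      boolean requiresNoIffResponse) {
            this.minSpeedKts = minSpeedKts;
            this.maxAltitudeFt = maxAltitudeFt;
            this.requiresNoIffResponse = requiresNoIffResponse;
        }

        /** Returns a tentative classification, or null if the doctrine does not apply. */
        public String evaluate(double speedKts, double altitudeFt, boolean iffResponded) {
            boolean profileMatch = speedKts >= minSpeedKts && altitudeFt <= maxAltitudeFt;
            boolean iffMatch = !requiresNoIffResponse || !iffResponded;
            return (profileMatch && iffMatch) ? "SUSPECT" : null;
        }

        public static void main(String[] args) {
            IdentificationDoctrine doctrine = new IdentificationDoctrine(400, 5000, true);
            System.out.println(doctrine.evaluate(480, 1200, false));  // SUSPECT
            System.out.println(doctrine.evaluate(300, 28000, true));  // null (doctrine silent)
        }
    }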

6.    Implement Alternate Scenario Locations

The ADC Simulation is set in the Arabian Gulf region, which has been (and remains) a particularly dangerous and demanding operating environment for aircraft carrier battle groups. However, there are many other locations of military interest to the United States Navy, such as the Korean Peninsula, the Taiwan Strait, and the Adriatic Sea, that are also relevant to current and future naval operations. Adding the capability to change the scenario location would increase the flexibility and applicability of the simulation program.

7.    Implement More Detailed Treatment of SPY-1B Radar System, SLQ-32 System, and Communications System

During the development process, the performance and characteristics of the SPY-1B Radar System, SLQ-32 System, and External Communications System were abstracted to provide realistic qualitative behavior without the substantial amount of coding needed to replicate them in detail (which was not within the scope of this research). Implementing these systems at a finer level of detail that addresses the underlying characteristics and operations of the combat systems equipment would increase the overall realism and accuracy of the simulation. A byproduct of this project would be the capability to accurately simulate the effects of weather and geography (land formations) on the performance of the systems. For certain systems, this extension would require delving into classified material to properly implement the improvements.
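At an unclassified level, the kind of effect being described could be approximated by a single-sweep detection probability that falls off with range and is further reduced by rain attenuation or terrain masking, as in the notional sketch below. None of the constants reflect the actual characteristics of the SPY-1B radar.

    /** Sketch of a finer-grained radar detection check in which weather and
     *  terrain masking reduce detection probability with range; all constants
     *  are notional and do not represent the SPY-1B radar's true performance. */
    public class RadarDetectionModel {

        /** Probability of detecting a contact on a single sweep. */
        static double detectionProbability(double rangeNm, double rainRateMmHr,
                                           boolean maskedByLand) {
            if (maskedByLand) return 0.0;                            // terrain blocks line of sight
            double base = Math.min(1.0, 180.0 / rangeNm);            // notional falloff with range
            double weatherLoss = 1.0 / (1.0 + 0.05 * rainRateMmHr);  // notional rain attenuation
            return base * weatherLoss;
        }

        public static void main(String[] args) {
            System.out.printf("Clear, 60 NM:   %.2f%n", detectionProbability(60, 0, false));
            System.out.printf("Heavy rain:     %.2f%n", detectionProbability(60, 16, false));
            System.out.printf("Behind terrain: %.2f%n", detectionProbability(60, 0, true));
        }
    }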

8.    Conduct a More In-Depth Study of Metrics for Watchstander Performance Attributes (OR/OA)

Because an accurate and vetted data source for many of the simulation's modifiable attributes was not available during its design and development, the watchstander attributes (skill, experience, fatigue, decision-maker type), equipment readiness levels, and external scenario attributes were derived from extensive interviews with air-defense training experts. These values specifically refer to the probabilities of success and the maximum watchstander task times associated with the above attributes. Although the user can modify these values if he or she disagrees with the defaults, the ADC Simulation would benefit from a rigorous, in-depth study of the watchstander performance metrics for each attribute. Such a study would be of significant value to the Navy in general as well as advantageous in increasing the realism of the watchstander agents. Additionally, as part of this study, a more in-depth statistical analysis of the ADC Simulation could be conducted to provide a greater understanding of the program's performance characteristics, which could then be measured against the performance of an actual CIC team conducting air-defense operations. This comparison would be especially valuable if the investigator were able to replicate, within the simulation, the scenarios performed by the actual CIC team, so that a direct comparison could be made and the ADC Simulation's fidelity assessed. A study of this magnitude and scope would be well suited to a student in the Operations Analysis/Research curriculum.

9.    Implement the Capability to Replay Previous Scenarios and/or Portions of Those Scenarios

The current version of the ADC Simulation cannot rerun previously conducted scenarios. To increase the training usefulness of the simulation, a capability could be added that allows the user to replay an entire scenario run, or designated portions of it, for review and training purposes.
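Because every scenario event is already logged, a replay facility could be as simple as re-issuing the logged events in scenario-time order, optionally restricted to a window of interest, as in the sketch below. The LoggedEvent record and the sample event descriptions are assumptions standing in for the simulation's actual log entries.

    import java.util.List;

    /** Sketch of a scenario replay loop: previously logged events are re-issued
     *  in scenario-time order, optionally restricted to a window of interest.
     *  The LoggedEvent record and its fields are illustrative assumptions. */
    public class ScenarioReplayer {

        record LoggedEvent(int scenarioTimeSec, String description) { }

        static void replay(List<LoggedEvent> log, int fromSec, int toSec) {
            log.stream()
               .filter(e -> e.scenarioTimeSec() >= fromSec && e.scenarioTimeSec() <= toSec)
               .sorted((a, b) -> Integer.compare(a.scenarioTimeSec(), b.scenarioTimeSec()))
               .forEach(e -> System.out.println(e.scenarioTimeSec() + "s: " + e.description()));
        }

        public static void main(String[] args) {
            List<LoggedEvent> log = List.of(
                new LoggedEvent(120, "RSC detects new air contact, track 7021"),
                new LoggedEvent(310, "EWCO correlates ES bearing with track 7021"),
                new LoggedEvent(455, "Force TAO classifies track 7021"));
            replay(log, 0, 400);   // replay only the first portion of the scenario
        }
    }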

10.    Implement the Capability to Build Scenarios with Specified Contact Aircraft of Various Types and Behaviors

A useful feature that could be incorporated into the ADC Simulation would be the capability to create specific scenarios (instead of having the program generate random events, as it does now) containing contacts with desired aircraft behaviors and attributes. This would give the user greater flexibility in the potential uses of the simulation, especially if he or she prefers to test only a certain set of scenarios.
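A user-built scenario might reduce to a declared list of contacts, each with a type, behavior, start time, and start position, that a scenario loader injects at the appropriate times instead of generating events at random; the sketch below illustrates the idea. The ScenarioContact record and the behavior strings are illustrative assumptions.

    import java.util.List;

    /** Sketch of a user-built scenario: instead of random generation, the user
     *  declares the exact contacts (type, behavior, start time, position) to
     *  inject.  The record and behavior strings are illustrative assumptions. */
    public class ScenarioBuilderExample {

        record ScenarioContact(String type, String behavior, int startTimeSec,
                               double startLat, double startLon) { }

        public static void main(String[] args) {
            List<ScenarioContact> scenario = List.of(
                new ScenarioContact("Commercial Air", "transit published airway", 0, 27.1, 51.8),
                new ScenarioContact("Hostile Fighter", "mimic commercial profile, attack inside 40 NM", 600, 26.4, 53.0),
                new ScenarioContact("Helicopter", "low and slow near oil platforms", 900, 26.9, 52.2));

            // A scenario loader in the simulation could inject each contact at its
            // declared start time rather than generating events at random.
            scenario.forEach(c -> System.out.println(
                c.startTimeSec() + "s: inject " + c.type() + " (" + c.behavior() + ")"));
        }
    }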

C.    Future Work to Adapt the ADC Simulation for Advanced Training of Watchstanders

1.    First Phase Single Watchstander Training System

The second major project for the continued development of the ADC Simulation would be to allow the user to play a specific watchstander in the simulation for training purposes. Such a program would display a screen equivalent to the specified watchstander's actual CIC console display and would prompt the user to participate in the simulated CIC team's duties. The ADC Simulation would run in the background; in addition to handling the display for the live watchstander, it would simulate the other watchstander agents' actions and reports (voice and text) and coordinate the overall scenario events.

                                                       Figure 40.                   Live Watchstanders Participating in Air-Defense Training Simulation.

 

2.    Second Phase Multi-Watchstander, Interlinked Training System

Once the ADC Training Simulation described above was completed, the next logical step in its development and deployment would be to allow multiple human watchstanders to train with the simulation program at the same time as part of a coordinated CIC watch team (see Figure 40). Each watchstander would interact with a computing platform that displays the CIC console associated with that particular watchstation, while a central Multi-Watchstander ADC Training Simulation server program would autonomously control the scenario (as configured by the training supervisor). All of the watchstanders would be connected via headsets for communication, and the simulation program would use voice cues to replicate voice reports from other ships. The simulation server would also assume the roles of any watchstanders not filled by live trainees. This training program would be similar to the Battle Force Tactical Trainer (BFTT) system discussed in Chapter Two, with the important difference that the scenario would be driven by a multi-agent system capable of generating dynamic, rich, and demanding events to challenge the watchstanders. Additionally, the Multi-Watchstander ADC Training Simulation would give training supervisors an easier and much faster way to generate scenarios for CIC team training than currently exists.
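One small but concrete piece of such a server is the role-assignment step at scenario start: stations claimed by live trainees remain human-controlled, and the server stands up watchstander agents for the rest. The sketch below illustrates the idea with a subset of the CIC stations; the class and the assignment logic are assumptions, not part of the existing program.

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Set;

    /** Sketch of the role-assignment step for a multi-watchstander training
     *  server: stations claimed by live trainees stay human-controlled, and
     *  the simulation fills the remaining stations with watchstander agents.
     *  The station list shown here is only a subset of the full CIC team. */
    public class WatchTeamRoleAssigner {

        static Map<String, String> assignRoles(Set<String> liveTrainees) {
            String[] stations = { "Force TAO", "Force AAWC", "RSC", "EWCO" };
            Map<String, String> assignment = new LinkedHashMap<>();
            for (String station : stations) {
                assignment.put(station,
                        liveTrainees.contains(station) ? "LIVE TRAINEE" : "SIMULATED AGENT");
            }
            return assignment;
        }

        public static void main(String[] args) {
            assignRoles(Set.of("Force TAO", "RSC"))
                .forEach((station, role) -> System.out.println(station + " -> " + role));
        }
    }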


THIS PAGE INTENTIONALLY LEFT BLANK

VII.  SUMMARY AND CONCLUSION

The Air Defense Commander (ADC) Simulation is a top-view, dynamic, graphics-driven software implementation of an AEGIS cruiser Combat Information Center (CIC) team performing the Battle Group Air Defense Commander duties in the Arabian Gulf region. The program simulates the mental processes, decision-making aspects, cognitive attributes, and communications of an eleven-member CIC air-defense team performing its duties under the stressful conditions imposed by the requirement to maintain overall situational awareness of the battle group's airspace. The ADC Simulation was developed to help air-defense trainers gain understanding and insight into the degree to which CIC watchstander skill, experience, fatigue, type of decision-maker, and various environmental attributes influence the performance of the individual watchstander as well as the collective watch team. Developed in the Java language, the program implements the CIC watchstanders using multi-agent systems (MAS) technology, which provides a robust architecture for closely simulating the human participants. The program offers the user a significant level of flexibility, including the capabilities to configure almost every attribute in the simulation (watchstander attribute levels, environment settings, etc.), record all of the events within a scenario for later review and analysis, examine performance metrics of the simulated watchstanders, and interact with and modify ongoing scenarios.

The development of the ADC Simulation has demonstrated the potential usefulness of such programs to the United States Navy and the other services in several ways. First, air-defense operations were successfully modeled and implemented in a software system that reasonably represents this vital warfare area while offering a unique, helpful, and new approach to evaluating such operations.

Second, the successful production of the ADC Simulation serves as a proof of concept for the capabilities and usefulness of such programs in assisting the military with training and planning. Because it was developed using a multi-agent system architecture, the ADC Simulation can be readily modified to model other, similar warfare operations performed by other naval communities and military services.

Lastly, it showed that the knowledge and capability to design and develop such systems (as well as more sophisticated ones) reside with the faculty and students of the Naval Postgraduate School Computer Science Department and the MOVES Institute. This academic expertise, combined with the military knowledge and initiative of the officer-students, provides the United States Navy and the Department of Defense with an extraordinary wealth of knowledge and talent for developing and implementing software training and support systems for the military.

APPENDIX A.  UCD PROCESS PHASE THREE DATA

A.    Conceptual Design Sketches

                                                                                  Figure 41.                   Early Menu Design Sketches for ADC Simulation.

 

 

                                                                                  Figure 42.                   Early Menu Design Sketches for ADC Simulation.

 

                                                                                  Figure 43.                   Early Menu Design Sketches for ADC Simulation.

 

 

                                                                                  Figure 44.                   Early Menu Design Sketches for ADC Simulation.

 

                                                                                  Figure 45.                   Early Menu Design Sketches for ADC Simulation.

THIS PAGE INTENTIONALLY LEFT BLANK

APPENDIX B.  UCD PROCESS PHASE FIVE DATA

A.    Analysis of Task Data

Listed below is a summary breakdown of the key data collected from the five subjects' evaluations of each task they were asked to perform. For each task, either a primary and a secondary measurement value or two primary measurement values are provided. In the former case, the primary value includes the best case, worst case, and target level for the measurement along with its average value. In the latter case, both primary values include their best cases, worst cases, and target levels. Following each summary table are comment blocks for noteworthy errors and for memorability/learnability issues encountered during the evaluations. The best-case, worst-case, and target levels for the number of errors and the times to complete tasks were determined during the initial development of the task list.

Two sets of graphs are associated with each task data table; they are displayed in Section B. The first set contains the averaged and individual responder task-time graphs, and the second set contains the averaged and individual responder number-of-errors-per-task graphs.

Task #1: Open Scenario Menu

Usability Attribute: Initial Performance
Number of mouse clicks to complete: 1 click

Primary Measurement Value: Number of Errors
    Best Case: 0 Errors
    Target Level: 1 Error
    Worst Case: 2 Errors
    Average Number of Errors: 1.2 Errors

Secondary Measurement Value: Time to Complete Task
    Average Time to Complete Task: 16.6 Seconds

Error Comments: No noteworthy errors.

Memorability/Learnability Issues: N/A

 

 


Task #2: Open Watchstander Attributes Menu

Usability Attribute: Initial Performance
Number of mouse clicks to complete: 1 click

Primary Measurement Value: Time to Complete Task
    Best Case: 2 Seconds
    Target Level: 5 Seconds
    Worst Case: 10 Seconds
    Average Time to Complete Task: 4.2 Seconds

Secondary Measurement Value: Number of Errors
    Average Number of Errors: 0.2 Errors

Error Comments: No noteworthy errors.

Memorability/Learnability Issues: It was noted that the average number of errors for this task dropped appreciably after the completion of Task #1. This decrease is believed to result from the subjects' increased familiarity with the main menu bar items. There was also a corresponding decrease in the average time it took the subjects to perform the task.

 

 

Task #3: Open CIC Equipment Setup Menu

Usability Attribute: Initial Performance
Number of mouse clicks to complete: 1 click

Primary Measurement Value: Time to Complete Task
    Best Case: 2 Seconds
    Target Level: 5 Seconds
    Worst Case: 10 Seconds
    Average: 2.8 Seconds

Secondary Measurement Value: Number of Errors
    Average: 0 Errors

Error Comments: No noteworthy errors.

Memorability/Learnability Issues: It was noted that the average number of errors for this task dropped appreciably after the completion of Task #1. This decrease is believed to result from the subjects' increased familiarity with the main menu bar items. There was also a corresponding decrease in the average time it took the subjects to perform the task.

 

 

 

 

Task #4: Open Scenario Doctrine Setup Menu

Usability Attribute: Initial Performance
Number of mouse clicks to complete: 1 click

Primary Measurement Value: Number of Errors
    Best Case: 0 Errors
    Target Level: 1 Error
    Worst Case: 2 Errors
    Average: 0.4 Errors

Secondary Measurement Value: Time to Complete Task
    Average: 3.2 Seconds

Error Comments: No noteworthy errors.

Memorability/Learnability Issues: It was noted that the average number of errors for this task dropped appreciably after the completion of Task #1. This decrease is believed to result from the subjects' increased familiarity with the main menu bar items. There was also a corresponding decrease in the average time it took the subjects to perform the task.

 

 

Task #5: Open Scenario External Attributes Menu

Usability Attribute: Initial Performance
Number of mouse clicks to complete: 1 click

Primary Measurement Value: Time to Complete Task
    Best Case: 2 Seconds
    Target Level: 5 Seconds
    Worst Case: 10 Seconds
    Average: 2.6 Seconds

Secondary Measurement Value: Number of Errors
    Average: 0.2 Errors

Error Comments: No noteworthy errors.

Memorability/Learnability Issues: It was noted that the average number of errors for this task dropped appreciably after the completion of Task #1. This decrease is believed to result from the subjects' increased familiarity with the main menu bar items. There was also a corresponding decrease in the average time it took the subjects to perform the task.

 

 

 

Task #6: Open Simulation Logs Menu

Usability Attribute: Initial Performance
Number of mouse clicks to complete: 1 click

Primary Measurement Value: Number of Errors
    Best Case: 0 Errors
    Target Level: 1 Error
    Worst Case: 2 Errors
    Average: 0 Errors

Secondary Measurement Value: Time to Complete Task
    Average: 2.4 Seconds

Error Comments: No noteworthy errors.

Memorability/Learnability Issues: It was noted that the average number of errors for this task dropped appreciably after the completion of Task #1. This decrease is believed to result from the subjects' increased familiarity with the main menu bar items. There was also a corresponding decrease in the average time it took the subjects to perform the task.

 

 

Task #7: Change the Maximum Time It Takes a Watchstander to Complete a Task

Usability Attribute: Initial Performance
Number of mouse clicks to complete: 3 clicks

Primary Measurement Value #1: Number of Errors
    Best Case: 0 Errors
    Target Level: 1 Error
    Worst Case: 3 Errors
    Average: 0.6 Errors

Primary Measurement Value #2: Time to Complete Task
    Best Case: 5 Seconds
    Target Level: 20 Seconds
    Worst Case: 30 Seconds
    Average: 8.4 Seconds

Error Comments: The only noteworthy error, committed by several subjects, was first searching the "Watchstander Attributes" menu before realizing that the "Watchstander Tasks & Skills" menu was where the task could be completed. This was due to initial confusion over the purpose and contents of each menu.

Memorability/Learnability Issues: According to the subjects, the overwhelming cause of the errors was the similarity in the names of the two menus. After the evaluation, several of the subjects made recommendations to eliminate the confusion caused by the menu names. These recommendations can be found in Chapter III, Section F-7, Recommendations.

 

 

Task #8: Select a Contact to Display Data in the Contact Data Display Window

Usability Attribute: Initial Performance
Number of mouse clicks to complete: 1 click

Primary Measurement Value: Time to Complete Task
    Best Case: 5 Seconds
    Target Level: 15 Seconds
    Worst Case: 30 Seconds
    Average: 4 Seconds

Secondary Measurement Value: Number of Errors
    Average: 0 Errors

Error Comments: No noteworthy errors.

Memorability/Learnability Issues: N/A

 

 

Task #9: Select the F-TAO Watchstander to Display Data in the Agent Attributes Window

Usability Attribute: Initial Performance
Number of mouse clicks to complete: 1 click

Primary Measurement Value: Number of Errors
    Best Case: 0 Errors
    Target Level: 1 Error
    Worst Case: 4 Errors
    Average: 1.2 Errors

Secondary Measurement Value: Time to Complete Task
    Average: 12.6 Seconds

Error Comments: There was a slight rise in the average number of errors between this task and Task #8, which was similar in nature, and a more significant increase in the average time to complete the task. The increase in time was attributed to subjects initially searching the "Watchstander Attributes" menu instead of immediately selecting the F-TAO icon in the CIC Agent Display.

Memorability/Learnability Issues: It was expected that both measurement values would improve for this task because it was similar in nature to Task #8, but for the reason cited above, this did not occur.

 

Task #10: Open a Contact's Pop-up Options Window

Usability Attribute: Initial Performance
Number of mouse clicks to complete: 1 click

Primary Measurement Value: Number of Errors
    Best Case: 0 Errors
    Target Level: 2 Errors
    Worst Case: 5 Errors
    Average: 4 Errors

Secondary Measurement Value: Time to Complete Task
    Average: 49.6 Seconds

Error Comments: Two of the subjects produced a significant number of errors during this task, which increased the time they required to complete it. The overwhelming cause of the errors for these subjects was their failure to recognize that a right mouse click on the contact would make the pop-up options window appear (just as it does for a file icon in Windows). As a result, these subjects searched through several menus to no avail until they returned to the icon and discovered the correct action.

Memorability/Learnability Issues: The initial goal for this task was to show that the subjects would transfer their familiarity with MS Windows operations to the performance of tasks in the ADC Simulation, allowing for a smooth transition to the program with little difficulty. Although some of the subjects recognized the similarity between Windows and the simulation and consequently performed adequately, the other two subjects had significant trouble.

 

 

Task #11: Open F-TAO Pop-up Options Window

Usability Attribute: Initial Performance
Number of mouse clicks to complete: 1 click

Primary Measurement Value: Time to Complete Task
    Best Case: 5 Seconds
    Target Level: 10 Seconds
    Worst Case: 30 Seconds
    Average: 6.4 Seconds

Secondary Measurement Value: Number of Errors
    Average: 1.2 Errors

Error Comments: Only one of the subjects had difficulty completing this task; the other four subjects performed it with zero errors.

Memorability/Learnability Issues: Because of the similarity of this task to Task #10, it was expected that task completion times and the number of errors committed would decrease significantly. With the exception of the one subject, this expected performance materialized, and there was a substantial decrease in errors and task completion times. The subject who encountered difficulties nonetheless improved considerably compared to his performance on Task #10.

 

 

Task #12: Increase the Time Compression of the Simulation

Usability Attribute: Initial Performance
Number of mouse clicks to complete: 1 click

Primary Measurement Value #1: Number of Errors
    Best Case: 0 Errors
    Target Level: 1 Error
    Worst Case: 2 Errors
    Average: 1.8 Errors

Primary Measurement Value #2: Time to Complete Task
    Best Case: 5 Seconds
    Target Level: 15 Seconds
    Worst Case: 30 Seconds
    Average: 27.4 Seconds

Error Comments: Only one of the subjects had difficulty completing this task. The other four subjects performed the task with zero errors and significantly lower task completion times (three finished in two seconds, one in eleven seconds). The subject who encountered difficulty did not realize that the "Increase Time Compression" button was located on the GUI in the "Shortcut Control Button Display" and instead searched through the menus.

Memorability/Learnability Issues: N/A

 

Task #13: Pause the Simulation

Usability Attribute: Initial Performance
Number of mouse clicks to complete: 1 click

Primary Measurement Value #1: Number of Errors
    Best Case: 0 Errors
    Target Level: 1 Error
    Worst Case: 2 Errors
    Average: 0 Errors

Primary Measurement Value #2: Time to Complete Task
    Best Case: 5 Seconds
    Target Level: 15 Seconds
    Worst Case: 30 Seconds
    Average: 2.8 Seconds

Error Comments: No noteworthy errors.

Memorability/Learnability Issues: After Task #12, it was expected that the subjects' performance on this task would improve significantly because of their familiarity with the "Shortcut Control Button Display." This improvement did occur.

 

 

Task #14: Pause the Simulation (2nd Way)

Usability Attribute: Initial Performance
Number of mouse clicks to complete: 2 clicks

Primary Measurement Value #1: Number of Errors
    Best Case: 0 Errors
    Target Level: 1 Error
    Worst Case: 2 Errors
    Average: 1 Error

Primary Measurement Value #2: Time to Complete Task
    Best Case: 5 Seconds
    Target Level: 15 Seconds
    Worst Case: 30 Seconds
    Average: 11.6 Seconds

Error Comments: There was an expected increase in the number of errors and the average task completion time because the subjects had to determine where the second place to pause the simulation could be found.

Memorability/Learnability Issues: After the evaluation, most of the subjects stated that they searched the "File" menu first because it seemed the logical place to look (which was a correct deduction).

 

 

Task #15: Set the Situation Assessment Skill Level to Expert for the Force TAO

Usability Attribute: Learnability
Number of mouse clicks to complete: 4 clicks

Primary Measurement Value: Time to Complete Task
    Best Case: 5 Seconds
    Target Level: 15 Seconds
    Worst Case: 30 Seconds
    Average: 49.2 Seconds

Secondary Measurement Value: Number of Errors
    Average: 3 Errors

Error Comments: The errors committed during this task occurred for two reasons. First, some subjects initially opened the "Watchstander Tasks & Skills" menu to complete the task. Second, the high task completion times and average number of errors resulted from some of the subjects trying to select the F-TAO icon in the CIC Agent Display and use its pop-up options window to complete the task.

Memorability/Learnability Issues: During the evaluations, it was noted that a significant number of subjects attempted to complete the task by opening the pop-up options window on the F-TAO icon. This indicated that the subjects expected to be able to manipulate all of the watchstander attributes found in the "Watchstander Attributes" menu via the "CIC Agent Display" icons as well. Several subjects communicated recommendations to this effect, which are discussed in Chapter III, Section F-7, Recommendations.

 

 

Task #16: Set the Fatigue Level to Exhausted for the RSC

Usability Attribute: Learnability
Number of mouse clicks to complete: 4 clicks

Primary Measurement Value: Number of Errors
    Best Case: 0 Errors
    Target Level: 1 Error
    Worst Case: 2 Errors
    Average: 0.4 Errors

Secondary Measurement Value: Time to Complete Task
    Average: 10.6 Seconds

Error Comments: No noteworthy errors.

Memorability/Learnability Issues: It was expected that the average number of errors and the average task completion time would decrease significantly because of the similarity of this task to Task #15. These expectations were realized.

 

 

Task #17: Set the SPY-1B Radar Equipment Readiness Level to Non-Operational

Usability Attribute: Learnability
Number of mouse clicks to complete: 4 clicks

Primary Measurement Value: Number of Errors
    Best Case: 0 Errors
    Target Level: 1 Error
    Worst Case: 2 Errors
    Average: 0 Errors

Secondary Measurement Value: Time to Complete Task
    Average: 7.4 Seconds

Error Comments: No noteworthy errors.

Memorability/Learnability Issues: It was expected that the average number of errors and the average task completion time would decrease significantly because of the similarity of this task to Task #15. These expectations were realized.

 

 

Task #18: Set the ADC Doctrine Query Range to 30 NM & Warning Range to 20 NM

Usability Attribute: Learnability
Number of mouse clicks to complete: 2 clicks

Primary Measurement Value: Time to Complete Task
    Best Case: 5 Seconds
    Target Level: 15 Seconds
    Worst Case: 30 Seconds
    Average: 4.4 Seconds

Secondary Measurement Value: Number of Errors
    Average: 0 Errors

Error Comments: No noteworthy errors.

Memorability/Learnability Issues: It was expected that the average number of errors and the average task completion time would decrease significantly because of the similarity of this task to Task #15. These expectations were realized.

 

 

Task #19: Set the Scenario Threat Level to Red

Usability Attribute: Learnability
Number of mouse clicks to complete: 3 clicks

Primary Measurement Value: Time to Complete Task
    Best Case: 5 Seconds
    Target Level: 15 Seconds
    Worst Case: 30 Seconds
    Average: 10.6 Seconds

Secondary Measurement Value: Number of Errors
    Average: 0 Errors

Error Comments: No noteworthy errors.

Memorability/Learnability Issues: It was expected that the average number of errors and the average task completion time would decrease significantly because of the similarity of this task to Task #15. These expectations were realized.

 

 

Task #20: Open the Scenario Event Log

Usability Attribute: Learnability
Number of mouse clicks to complete: 2 clicks

Primary Measurement Value: Number of Errors
    Best Case: 0 Errors
    Target Level: 1 Error
    Worst Case: 2 Errors
    Average: 0 Errors

Secondary Measurement Value: Time to Complete Task
    Average: 5 Seconds

Error Comments: No noteworthy errors.

Memorability/Learnability Issues: N/A

 

 

Task #21: Open the SLQ-32 System Status Log

Usability Attribute: Learnability
Number of mouse clicks to complete: 3 clicks

Primary Measurement Value: Time to Complete Task
    Best Case: 5 Seconds
    Target Level: 15 Seconds
    Worst Case: 30 Seconds
    Average: 16.4 Seconds

Secondary Measurement Value: Number of Errors
    Average: 1.8 Errors

Error Comments: There was an unexpected rise in the average number of errors and task completion times, attributed to confusion among three of the subjects. Upon determining that the task involved the SLQ-32 (a piece of CIC equipment), these subjects first searched the "CIC Equipment Setup" menu for the appropriate menu item instead of the "Simulation Logs" menu.

Memorability/Learnability Issues: Although two of the subjects realized that the "Simulation Logs" menu was the appropriate menu (since it contains the term "Logs" in its name), the other subjects could not capitalize on this familiarity because they associated the type of log (SLQ-32) with the "CIC Equipment Setup" menu.

 

 

Task #22: Set the Performance Probabilities for the Watchstander Fatigue Levels to 0.5, 0.7, 0.9 (L to R)

Usability Attribute: Learnability
Number of mouse clicks to complete: 3 clicks

Primary Measurement Value #1: Number of Errors
    Best Case: 0 Errors
    Target Level: 1 Error
    Worst Case: 2 Errors
    Average: 0.8 Errors

Primary Measurement Value #2: Time to Complete Task
    Best Case: 5 Seconds
    Target Level: 15 Seconds
    Worst Case: 30 Seconds
    Average: 13.6 Seconds

Error Comments: No noteworthy errors.

Memorability/Learnability Issues: It was expected that the average number of errors and the average task completion time would decrease significantly because of the similarity of this task to Task #7 and the subjects' familiarity with the menu items in the "Watchstander Tasks & Skills" menu. These expectations were realized.

 

 

Task #23: Change the Maximum Time for the F-TAO Watchstander to Complete a Task

Usability Attribute: Learnability
Number of mouse clicks to complete: 2 clicks

Primary Measurement Value #1: Number of Errors
    Best Case: 0 Errors
    Target Level: 1 Error
    Worst Case: 2 Errors
    Average: 0.6 Errors

Primary Measurement Value #2: Time to Complete Task
    Best Case: 5 Seconds
    Target Level: 15 Seconds
    Worst Case: 30 Seconds
    Average: 14.4 Seconds

Error Comments: No noteworthy errors.

Memorability/Learnability Issues: It was expected that the average number of errors and the average task completion time would decrease significantly because of the similarity of this task to Tasks #7 and #22 and the subjects' familiarity with the menu items in the "Watchstander Tasks & Skills" menu. These expectations were realized.

 

 

Task #24: Change the Speed of the Hostile Air Contact to 500 KTS

Usability Attribute: Learnability
Number of mouse clicks to complete: 1 click

Primary Measurement Value #1: Number of Errors
    Best Case: 0 Errors
    Target Level: 1 Error
    Worst Case: 2 Errors
    Average: 2.4 Errors

Primary Measurement Value #2: Time to Complete Task
    Best Case: 5 Seconds
    Target Level: 15 Seconds
    Worst Case: 30 Seconds
    Average: 29 Seconds

Error Comments: It was expected that the average number of errors and average task completion time would decrease because of the similarity of this task to Task #10 and the subjects' familiarity with the contact pop-up options menu, but this was not the case. Several of the subjects initially searched the menus or attempted to select the contact (left mouse click) in order to modify the attributes displayed in the "Contact Data Display."

Memorability/Learnability Issues: Even after Task #10, some of the subjects did not realize that they could modify the attributes of the hostile contact via its pop-up options window.

 

 

Task #25: Change the F-AAWC Experience Attribute to Expert

Usability Attribute: Learnability
Number of mouse clicks to complete: 4 clicks

Primary Measurement Value #1: Number of Errors
    Best Case: 0 Errors
    Target Level: 1 Error
    Worst Case: 2 Errors
    Average: 1.2 Errors

Primary Measurement Value #2: Time to Complete Task
    Best Case: 5 Seconds
    Target Level: 15 Seconds
    Worst Case: 30 Seconds
    Average: 14.4 Seconds

Error Comments: The largest cause of errors was that several of the subjects initially searched the "Watchstander Tasks & Skills" menu instead of the "Watchstander Attributes" menu.

Memorability/Learnability Issues: It was expected that the average number of errors and the average task completion time would decrease significantly because of the similarity of this task to Tasks #15 and #16 and the familiarity with the menu items in the "Watchstander Attributes" menu. These expectations were generally realized.

 

 

Task #26: Change the Link Equipment Status to Partially Degraded

Usability Attribute: Learnability
Number of mouse clicks to complete: 4 clicks

Primary Measurement Value #1: Number of Errors
    Best Case: 0 Errors
    Target Level: 1 Error
    Worst Case: 2 Errors
    Average: 0.2 Errors

Primary Measurement Value #2: Time to Complete Task
    Best Case: 5 Seconds
    Target Level: 15 Seconds
    Worst Case: 30 Seconds
    Average: 6.4 Seconds

Error Comments: No noteworthy errors.

Memorability/Learnability Issues: It was expected that the average number of errors and the average task completion time would decrease significantly because of the similarity of this task to Task #17 and the familiarity with the menu items in the "CIC Equipment Setup" menu. These expectations were generally realized.

 

 

 

 

 

 

 

 

 

 

B.    Simulation Evaluations

1.    Evaluation Charts (Number of Errors and Task Completion Times)

                                                                                                     Figure 46.                   Average Number of Errors per Task.

 

 

                                                                                                      Figure 47.                   Errors During Performance of Tasks.

                                                                                                     Figure 48.                   Average Number of Errors per Task.

 

 

                                                                                                      Figure 49.                   Errors During Performance of Tasks.

 

                                                                                                     Figure 50.                   Average Number of Errors per Task.

 

 

                                                                                                       Table 50.        Errors During Performance of Tasks.

 

                                                                                             Figure 51.                   Average Number of Errors per Task.

 

 

                                                                                                      Figure 52.                   Errors During Performance of Tasks.


                                                                                                            Figure 53.                   Average Task Completion Time.

 

 

                                                                                                               Figure 54.                   Total Time to Complete Tasks.

                                                                                                               Figure 55.                   Average Task Completion Time.

 

 

 

                                                                                                               Figure 56.                   Total Time to Complete Tasks.

                                                                                                            Figure 57.                   Average Task Completion Time.

 

 

 

                                                                                                               Figure 58.                   Total Time to Complete Tasks.

 

                                                                                                            Figure 59.                   Average Task Completion Time.

 

 

 

                                                                                                               Figure 60.                   Total Time to Complete Tasks.

 

 


C.    Simulation Evaluation Surveys

1.    Evaluation Survey Charts (Average and Raw Data)

 

                                                                                                            Figure 61.                   Screen Layout Survey Averages.

 

 

                                                                                                                                       Figure 62.                   Survey Scores.

                                                                                               Figure 63.                   Overall Display Layout Survey Averages.

 

                                                                                                                                       Figure 64.                   Survey Scores.

 

 

                                                                                      Figure 65.                   Menu Location and Wording Survey Averages.

 

                                                                                                                                       Figure 66.                   Survey Scores.

 

                                                                                                        Figure 67.                   Task Completion Survey Averages.

 


 

                                                                                                                                       Figure 68.                   Survey Scores.

 

APPENDIX C.  SIMULATION EVALUATION RESULTS AND AIR-DEFENSE EXPERT SURVEY RESULTS

A.    ADC Simulation Evaluation Results

1.    Evaluation Results for the RSC Watchstander Agent

 

                                                                              Figure 69.                   RSC Averaged Times-Radar Operations Skill Level.

                                                                            Figure 70.                   RSC Averaged Errors-Radar Operations Skill Levels.

 

                           Figure 71.                   RSC Averaged Number Attempted CIC Classifications, Radar Operations Skill Level.

                                                                                                Figure 72.                   RSC Averaged Times-Experience Level.

 

                                             Figure 73.                   RSC Averaged Number Attempted CIC Classifications-Experience Level.

                                                                                                      Figure 74.                   RSC Averaged Times-Fatigue Level.

                                                                                                    Figure 75.                   RSC Averaged Errors-Fatigue Levels.

                                                   Figure 76.                   RSC Averaged Number Attempted CIC Classifications-Fatigue Level.

 

                                                                         Figure 77.                   RSC Averaged Times-SPY-1B Radar Readiness Level.

 

                                                                       Figure 78.                   RSC Averaged Errors-SPY-1B Radar Readiness Levels.

 

                      Figure 79.                   RSC Averaged Number Attempted CIC Classifications-SPY-1B Radar Readiness Level.


2.    Evaluation Results for the EWCO Watchstander Agent

                                                                                            Figure 80.                   EWCO Averaged Times-ES Analysis Skill.

 

                                                                                            Figure 81.                   EWCO Averaged Errors-ES Analysis Skill.

                                          Figure 82.                   EWCO Averaged Number Attempted CIC Classifications-ES Analysis Skill.

 

                                                                                            Figure 83.                   EWCO Averaged Times-Experience Level.


                                                                                           Figure 84.                   EWCO Averaged Errors-Experience Level.

 

                                         Figure 85.                   EWCO Averaged Number Attempted CIC Classifications-Experience Level.

 

                                                                                                Figure 86.                   EWCO Averaged Times-Fatigue Levels.

 

                                                                                               Figure 87.                   EWCO Averaged Errors-Fatigue Levels.


                                               Figure 88.                   EWCO Averaged Number Attempted CIC Classifications-Fatigue Level.

 

                                                                  Figure 89.                   EWCO Averaged Times-SLQ-32 System Readiness Levels.


                                                                 Figure 90.                   EWCO Averaged Errors-SLQ-32 System Readiness Levels.

 

                        Figure 91.                   EWCO Averaged Number Attempted Classifications-SLQ-32 System Readiness Level.

3.    Evaluation Results for the Force TAO Watchstander Agent

                                                             Figure 92.                   Force TAO Averaged Times-Situational Awareness Skill Level.

 

                    Figure 93.                   Force TAO Averaged Classifications Errors (Percentage)-Situation Assessment Skill Level.

                                                                                    Figure 94.                   Force TAO Averaged Times-Experience Levels.

 

                                         Figure 95.                   Force TAO Averaged Classification Errors (Percentage) � Experience Level.

                                                                                       Figure 96.                   Force TAO Averaged Times � Fatigue Levels.

 

                                              Figure 97.                   Force TAO Averaged Classification Errors (Percentage) � Fatigue Levels.

 

                                                                               Figure 98.                   Force TAO Averaged Times-Decision-Maker Type.

 

                                  Figure 99.                   Force TAO Averaged Classification Errors (Percentage) � Decision-Maker Type.

4.    Evaluation Results for the CIC Team Comparison Trials

                                                                                             Figure 100.                 CIC Team Profile Trials Averaged Times.

 

                                            Figure 101.                 CIC Team Profile Trials Averaged # of Classification Errors (Percentage).

 

                                                        Figure 102.                 CIC Team Profile Trials Averaged # of Attempted Classifications.

 


5.    Evaluation Results for the Scenario Weather Trials

                                                                                            Figure 103.                 Scenario Weather Trials Averaged Times.

                                            Figure 104.                 Scenario Weather Trials Averaged # of Classification Errors (Percentage).


                                                Figure 105.                 Scenario Weather Trials Averaged # of Attempted CIC Classifications.


B.    Air-Defense Expert Surveys of ADC Simulation Performance

1.    Individual and Averaged Survey Results for the RSC Watchstander Questions

                                                                   Figure 106.                 Respondent Survey Results for RSC Simulation Questions.

                                                                      Figure 107.                 Averaged Survey Results for RSC Simulation Questions.

 


2.    Individual and Averaged Survey Results for the EWCO Watchstander Questions

                                                               Figure 108.                 Respondent Survey Results for EWCO Simulation Questions.

                                                                  Figure 109.                 Averaged Survey Results for EWCO Simulation Questions.


3.    Individual and Averaged Survey Results for the Force TAO Watchstander Questions

 

                                                         Figure 110.                 Respondent Survey Results for Force TAO Simulation Questions.

                                                            Figure 111.                 Averaged Survey Results for Force TAO Simulation Questions.

 

 

 


4.    Individual and Averaged Survey Results for CIC Team Questions

 

                                                          Figure 112.                 Respondent Survey Results for CIC Team Simulation Questions.

                                                              Figure 113.                 Averaged Survey Results for CIC Team Simulation Questions.


 

5.    Individual and Averaged Survey Results for Additional CIC Team Questions

                                          Figure 114.                 Respondent Survey Results for Additional CIC Team Simulation Questions.

 

 

 

 

                                             Figure 115.                 Averaged Survey Results for Additional CIC Team Simulation Questions.

 

BIBLIOGRAPHY

"AN/USQ-T46(V) Battle Force Tactical Training System," FAS Military Analysis Network, 30 June 1999, [http://www.fas.org], January 2003.

Bond, Larry, "Larry Bond's Harpoon 4™ Modern Naval Combat Simulation," [http://harpoon4.ubi.com/US/Features.htm], January 2003.

Bond, Larry, Harpoon Series™ Video Games, Strategic Simulations, Inc., © 1989-2003.

Brooks, M. Evan, "The 'Quintessential' Wargamers List for Military Professionals," 01 January 2001, [http://www.pressroom.com], January 2003.

Burr, R. G., Palinkas, L. A., Banta, G. R., Congleton, M. W., Kelleher, D. L. and Armstrong, C. G., Physical and Psychological Effects of Sustained Shipboard Operations on U.S. Navy Personnel: Naval Health Research Center, San Diego, 1990, p. 4.

Eddy, Mark F. and Kribs, H. Dewey, Cognitive and Behavioral Task Implications for Three Dimensional Displays Used in Combat Information/Direction Centers, [http://www.isdnet.org], 27 February 1998, p. 8.

Falstein, Noah, Strike Fleet™ Video Game, Lucasfilm Games, Ltd., Electronic Arts, © 1987.

Ferber, Jacques, Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence, Addison-Wesley, 1999, p. 11.

Harney, Robert C., Combat Systems: Volume 1. Sensor Elements, 06 September 2002, pp. 347-349.

Hiles, John, Integrated Asymmetric Goal Organization (IAGO): A Multiagent Model of Conceptual Blending, The MOVES Institute, 2002, p. 10.

"Identification Friend or Foe Systems: Questions & Answers," October 2002, [http://www.dean-boys.com], January 2003.

Interviews with Air-Defense Experts at AEGIS Training & Readiness Center (ATRC) Detachment, San Diego, Conducted by LT Sharif Calfee, USN, 14-15 August 2002.

Kenyon, Henry S., "Synthesizing the Big Picture," Signal, June 2002.

Largent, Andy, "Australian DOD Picks Harpoon 3," Inside MAC Games, 08 March 2002, [http://www.insidemacgames.com], January 2003.

Liebhaber, Michael J., et al., Naval Air Defense Threat Assessment: Cognitive Factors Model, Office of Naval Research, p. 2.

Liebhaber, Michael J. and Feher, Bela, Air Threat Assessment: Research Model and Display Guidelines, p. 1.

Luger, George F. and Stubblefield, William A., Artificial Intelligence: Structures and Strategies for Complex Problem Solving, Addison-Wesley Longman, Inc., 1998, pp. 663-664.

Maiorano, Alan G., et al., "A Primer on Naval Theater Air Defense," Joint Forces Quarterly, Spring 1996, p. 23.

McGaughey, Sean, "Training Systems: Concepts, Technologies and Application," Digital Systems Resources, Inc., Website, [http://www.simsysinc.com], January 2003.

Morrison, Jeffrey G., Hutchins, Susan G., et al., Tactical Decision Making Under Stress (TADMUS) Decision Support System, 1996, p. 1.

Mulligan, Robert M., Altom, Mark W. and Simkin, David K., "User Interface Design in the Trenches: Some Tips on Shooting from the Hip," Association of Computing Machinery, March 1991, pp. 232, 234.

Nielsen, Jakob, "Traditional Dialogue Design Applied to Modern User Interfaces," Communications of the ACM: Human Factors, Graphical and Multimedia Applications, October 1990, Vol. 33, No. 10, p. 111.

Osga, Glenn, et al., Design and Evaluation of Warfighter Task Support Methods in a Multi-Modal Watch Station, Space and Naval Warfare Systems Center (SPAWAR), San Diego, May 2002, p. iii.

Prensky, Marc, "True Believers: Digital Game-Based Learning in the Military," Digital Game-Based Learning, McGraw-Hill, 2001, p. 2.

Rose, Jim, Fifth Fleet™ Video Game, The Avalon Hill Game Company, Stanley Associates, © 1994.

Slabodkin, Gregory, "Navy App Unites Commanders: Planning Tool Gives Joint Commanders Data to Counter Air and Missile Attacks," Government Computer News, 12 October 1998, p. 46.

Wellbrink, Joerg and Darken, Rudolph, Sustained Attention Modeled as a Complex Adaptive System, MOVES Institute, p. 1.

INITIAL DISTRIBUTION LIST

1.                  Defense Technical Information Center

Ft. Belvoir, Virginia

 

2.                  Dudley Knox Library

Naval Postgraduate School

Monterey, California

 

3.                  Commander

Space and Naval Warfare Systems Center, San Diego

San Diego, California

 

4.                  Commanding Officer

AEGIS Training & Readiness Center, Dahlgren

Dahlgren, Virginia

 

5.                  Officer-In-Charge

AEGIS Training & Readiness Center, Detachment San Diego

San Diego, California

 

6.                  Susan Chipman, Ph.D.

Office of Naval Research, Code 342

Arlington, Virginia

 

7.                  Ralph E. Chatham, Ph.D.

Defense Advanced Research Projects Agency (DARPA)

Arlington, Virginia

 

8.                  Commander, Naval Sea Systems Command

Washington, D.C.

 



[1] Maiorano, Alan G., et al., "A Primer on Naval Theater Air Defense," Joint Forces Quarterly, Spring 1996, p. 23.

[2] Maiorano, p. 23.

[3] Maiorano, p. 24.

[4] Maiorano, p. 25.

[5] Maiorano, p. 26.

[6] Ferber, Jacques, Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence, Addison-Wesley, 1999, p. 11.

[7] Ferber, p. xv.

[8] Ferber, p. 11.

[9] Slabodkin, Gregory, "Navy App Unites Commanders: Planning Tool Gives Joint Commanders Data to Counter Air and Missile Attacks," Government Computer News, 12 October 1998, p. 46.

[10] Ibid., p. 45.

[11] Kenyon, Henry S., "Synthesizing the Big Picture," Signal, June 2002.

[12] Morrison, Jeffrey G., Hutchins, Susan G., et al., Tactical Decision Making Under Stress (TADMUS) Decision Support System, 1996, p. 1.

[13] Ibid., p. 2.

[14] Ibid., p. 2.

[15] Ibid., p. 2.

[16] Ibid., p. 2.

[17] Osga, Glenn, et al., Design and Evaluation of Warfighter Task Support Methods in a Multi-Modal Watch Station, Space and Naval Warfare Systems Center (SPAWAR), San Diego, May 2002, p. iii.

[18] Ibid.

[19] Ibid.

[20] Liebhaber, Michael J., et al., Naval Air Defense Threat Assessment: Cognitive Factors Model, Office of Naval Research, p. 2.

[21] Ibid.

[22] Ibid., p. 22.

[23] Liebhaber, Michael J. and Feher, Bela, Air Threat Assessment: Research Model and Display Guidelines, p. 1.

[24] Ibid., p. 4.

[25] Ibid., pp. 7-9.

[26] Eddy, Mark F. and Kribs, H. Dewey, Cognitive and Behavioral Task Implications for Three Dimensional Displays Used in Combat Information/Direction Centers, 27 February 1998, p. 8, [http://www.isdnet.org], September 2002.

[27] Ibid., p. 9.

[28] "AN/USQ-T46(V) Battle Force Tactical Training System," FAS Military Analysis Network, 30 June 1999, [http://www.fas.org], January 2003.

[29] McGaughey, Sean, "Training Systems: Concepts, Technologies and Application," Digital Systems Resources, Inc. Website, [http://www.simsysinc.com], January 2003.

[30] Falstein, Noah, Strike Fleet™ Video Game, Lucasfilm Games, Ltd., Electronic Arts, © 1987.

[31] Rose, Jim, Fifth Fleet™ Video Game, The Avalon Hill Game Company, Stanley Associates, © 1994.

[32] Bond, Larry, Harpoon Series™ Video Games, Strategic Simulations, Inc., © 1989-2003.

[33] Bond, Larry, "Larry Bond's Harpoon 4™ Modern Naval Combat Simulation," [http://harpoon4.ubi.com/US/Features.htm], January 2003.

[34] Largent, Andy, "Australian DOD Picks Harpoon 3," Inside MAC Games, 08 March 2002, [http://www.insidemacgames.com], January 2003.

[35] Prensky, Marc, "True Believers: Digital Game-Based Learning in the Military," Digital Game-Based Learning, McGraw-Hill, 2001, p. 2.

[36] Brooks, M. Evan, "The 'Quintessential' Wargamers List for Military Professionals," 01 January 2001, [http://www.pressroom.com], January 2003.

[37] Mulligan, Robert M., Altom, Mark W. and Simkin, David K., "User Interface Design in the Trenches: Some Tips on Shooting from the Hip," Association of Computing Machinery, March 1991, p. 232.

[38] Nielsen, Jakob, "Traditional Dialogue Design Applied to Modern User Interfaces," Communications of the ACM: Human Factors, Graphical and Multimedia Applications, October 1990, Vol. 33, No. 10, p. 111.

[39] Mulligan, Robert M., Altom, Mark W. and Simkin, David K., "User Interface Design in the Trenches: Some Tips on Shooting from the Hip," Association of Computing Machinery, March 1991, p. 234.

[40] Ibid.

[41] Ferber, Jacques, Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence, Addison-Wesley, 1999, p. 11.

[42] Ferber, p. xv.

[43] Ferber, p. 11.

[44] Ferber, p. 67.

[45] Ferber, p. 19.

[46] Ferber, p. 52.

[47] Ferber, p. 70.

[48] Ibid.

[49] Hiles, John, Integrated Asymmetric Goal Organization (IAGO): A Multiagent Model of Conceptual Blending, The MOVES Institute, 2002, p. 10.

[50] Interviews with Air-defense Experts at AEGIS Training & Readiness Center (ATRC) Detachment, San Diego, Conducted by LT Sharif Calfee, USN, 14-15 August 2002.

[51] Ibid.

[52] Burr, R. G., Palinkas, L. A., Banta, G. R., Congleton, M. W., Kelleher, D. L. and Armstrong, C. G., Physical and Psychological Effects of Sustained Shipboard Operations on U.S. Navy Personnel: Naval Health Research Center, San Diego, 1990, p. 4.

[53] Interviews with Air-Defense Experts at AEGIS Training & Readiness Center (ATRC) Detachment, San Diego, Conducted by LT Sharif Calfee, USN, 14-15 August 2002.

[54] Harney, Robert C., Combat Systems:Volume 1. Sensor Elements, 06 September 2002, pp. 347-349.

[55] "Identification Friend or Foe Systems: Questions & Answers," [http://www.dean-boys.com], January 2003.

[56] Luger, George F., Stubblefield, William A., Artificial Intelligence: Structures and Strategies for Complex Problem Solving, Addison-Wesley Longman, Inc., 1998, pp. 663-664.

[57] Wellbrink, Joerg and Darken, Rudolph, Sustained Attention Modeled as a Complex Adaptive System, MOVES Institute, p. 1.