NAVAL

POSTGRADUATE

SCHOOL

 

 

MONTEREY, CALIFORNIA

 

THESIS

 

A Test Methodology for Reliability Assessment of Collaborative Tools

 

by

 

Brenda Joy Powers

 

September 2004

 

 

Thesis Advisor: Mantak Shing

Thesis Co-Advisor: Neil Rowe


Approved for public release; distribution is unlimited




REPORT DOCUMENTATION PAGE

Form Approved OMB No. 0704-0188

Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instruction, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302, and to the Office of Management and Budget, Paperwork Reduction Project (0704-0188) Washington DC 20503.

1. AGENCY USE ONLY (Leave blank)

 

2. REPORT DATE

September 2004

3. REPORT TYPE AND DATES COVERED

Master's Thesis

4. TITLE AND SUBTITLE: A Test Methodology for Reliability Assessment of Collaborative Tools

5. FUNDING NUMBERS

 

6. AUTHOR(S): Brenda Joy Powers

7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES)

Naval Postgraduate School

Monterey, CA 93943-5000

8. PERFORMING ORGANIZATION REPORT NUMBER

9. SPONSORING /MONITORING AGENCY NAME(S) AND ADDRESS(ES)

N/A

10. SPONSORING/MONITORING AGENCY REPORT NUMBER

11. SUPPLEMENTARY NOTES: The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.

12a. DISTRIBUTION / AVAILABILITY STATEMENT

Approved for public release; distribution is unlimited

12b. DISTRIBUTION CODE

 

13. ABSTRACT (maximum 200 words)

In the past ten years, military operations, as is now evident in Iraq, have involved both joint-allied and coalition forces. The evolving joint- and coalition-warfare environment presents coordination challenges. Collaborative tools can ease the difficulties in meeting these challenges by enabling highly interactive work to be performed by individuals who are not necessarily geographically co-located. Collaborative tools will revolutionize the manner in which distributed warfighters interact and inform each other of mission-planning progress and situation assessment. These systems allow warfighters to integrate tactical information with key combat-support logistics data in both joint- and coalition-warfare environments.

Countless collaboration tools and knowledge-management systems exist today. Unfortunately, industry has developed these tools and systems primarily for use within exclusive communities of interest, services, or agencies. The end result is a proliferation of tools that have not been designed to operate under all network conditions. Since network conditions are not standardized in the joint- and coalition-warfare environment, it is necessary to determine whether a collaborative tool can perform under limited-bandwidth and high-latency conditions. Currently, there are neither evaluation criteria nor methodologies for evaluating collaborative tools with respect to performance reliability. This thesis proposes a test methodology for evaluating the performance reliability of collaborative tools, and demonstrates the effectiveness of the methodology with a case study of the performance evaluation of the InfoWorkSpace collaborative tool.

14. SUBJECT TERMS: Collaboration, Collaborative Tools, Software Reliability, Performance, Bandwidth, Latency, Network, DISA, CMO, DIA, JIVA, USJFCOM, CIE, Communications Resiliency, Test Case, IWS, DCTS, DoD, COTS, MCU, NGCS, XML, GROOVE, WEBBE, GDSS, Groupware

15. NUMBER OF PAGES

69

16. PRICE CODE

17. SECURITY CLASSIFICATION OF REPORT

Unclassified

18. SECURITY CLASSIFICATION OF THIS PAGE

Unclassified

19. SECURITY CLASSIFICATION OF ABSTRACT

Unclassified

20. LIMITATION OF ABSTRACT

 

UL

NSN 7540-01-280-5500                                                Standard Form 298 (Rev. 2-89)

Prescribed by ANSI Std. 239-18

Approved for public release; distribution is unlimited

 

 

A Test Methodology for Reliability Assessment of Collaborative Tools

 

Brenda Joy Powers

B.A., Point Loma Nazarene University, 1984

 

 

Submitted in partial fulfillment of the

requirements for the degree of

 

 

MASTER OF SCIENCE IN SOFTWARE ENGINEERING

 

 

from the

 

 

NAVAL POSTGRADUATE SCHOOL

September 2004

 

 

 

Author: Brenda Joy Powers

 

 

Approved by: Mantak Shing

Thesis Advisor

 

 

Neil Rowe

Thesis Co-Advisor

 

 

 

Peter Denning

Chairman, Department of Computer Science

ABSTRACT

 

 

In the past ten years, military operations, as is now evident in Iraq, have involved both joint-allied and coalition forces. The evolving joint- and coalition-warfare environment presents coordination challenges. Collaborative tools can ease the difficulties in meeting these challenges by enabling highly interactive work to be performed by individuals who are not necessarily geographically co-located. Collaborative tools will revolutionize the manner in which distributed warfighters interact and inform each other of mission-planning progress and situation assessment. These systems allow warfighters to integrate tactical information with key combat-support logistics data in both joint- and coalition-warfare environments.

Countless collaboration tools and knowledge-management systems exist today. Unfortunately, industry has developed these tools and systems primarily for use within exclusive communities of interest, services, or agencies. The end result is a proliferation of tools that have not been designed to operate under all network conditions. Since network conditions are not standardized in the joint- and coalition-warfare environment, it is necessary to determine whether a collaborative tool can perform under limited-bandwidth and high-latency conditions. Currently, there are neither evaluation criteria nor methodologies for evaluating collaborative tools with respect to performance reliability. This thesis proposes a test methodology for evaluating the performance reliability of collaborative tools, and demonstrates the effectiveness of the methodology with a case study of the performance evaluation of the InfoWorkSpace collaborative tool.

TABLE OF CONTENTS

 

I.  INTRODUCTION
    A.  background
    B.  thesis organization
II.  The art of collaboration
    A.  overview
        1.  Resources
        2.  Teamwork
    B.  collaborative software systems
    C.  military use of collaborative technology
III.  survey of collaborative tools
    A.  Information workspace (iws)
        1.  Overview
        2.  Military Application
    B.  Defense collaboration tool suite (dcts)
        1.  Overview
        2.  Military Application
    C.  groove workspace
        1.  Overview
        2.  Military Application
    D.  Webbe
        1.  Overview
        2.  Military Application
IV.  reliability assessment of collaborative tools
    A.  software reliability
    B.  performance: a reliability attribute
    C.  related research
    D.  test methodology for the assessment of performance reliability of collaborative tools
V.  application of test methodology: iws case study
    A.  test objective
    B.  Domain Requirements
    C.  performance objectives
    D.  test environment
        1.  Hardware Configuration
        2.  Software Configuration
    E.  test cases
    F.  scenarios
    G.  test execution and performance results
        1.  Startup
        2.  Keep-Alive (Maintenance of Bandwidth)
        3.  Communications Resiliency
        4.  Chat
        5.  File Transfer
        6.  Whiteboard
        7.  Video
        8.  Audio
        9.  Remote Room Login
        10.  Combined Applications
        11.  Shared View
    H.  Assessment of IWS Performance
VI.  conclusion
bibliography
INITIAL DISTRIBUTION LIST

LIST OF FIGURES

 

 

 

Figure 1:  Collaboration Model
Figure 2:  WEBBE Architecture
Figure 3:  Testbed Architecture
Figure 4:  Startup
Figure 5:  Keep Alive Bits per Second
Figure 6:  Resiliency Bits Loaded upon Re-Federation
Figure 7:  Chat Test Results
Figure 8:  File Transfer Test Results
Figure 9:  File Transfer Test Results with Varying Bandwidth
Figure 10:  Whiteboard Test Results
Figure 11:  Video Test Results
Figure 12:  Audio Bits Test Results
Figure 13:  Combined Applications Test Results
Figure 14:  Shared Viewing Test Results

LIST OF TABLES

 

 

 

Table 1:  Hardware Configuration
Table 2:  Test Case Scenarios
Table 3:  Chat Test Cases and Results
Table 4:  File Transfer Test Cases and Results
Table 5:  Audio Test Cases and Results
Table 6:  Summary of IWS Performance Test Results

ACKNOWLEDGMENTS

 

 

I wish to thank my advisors, Dr. Man-Tak Shing and Dr. Neil Rowe, for their extensive support and encouragement. I appreciate greatly the time they spent in helping me achieve this goal. I would like to thank Bret Michael for his instruction in software testing. I also want to thank my good friend and favorite professor, Richard Riehle, for his excellent instruction and his positive attitude, which encouraged me many times along this journey not to give up, but to keep pressing on to obtain this Master's degree.

I would like to extend a special thanks to Dr. LorRaine Duffy, Ed Monahan and the entire EADC team for their support in facilitating the implementation of this thesis. I also would like to thank Dr. Nikhil Dave for the use of his WAN analyzer and Dr. Gary McCown for providing a testbed environment, which facilitated application-testing opportunities.

I want to thank my parents, Bill and Joyce Powers, for their countless prayers and support. I especially want to thank my aunt, Janet Boyle, who from the very first day I began this adventure has been my primary source of encouragement, advising me to take one quarter at a time and keep focused on the goal of completing this Master's degree.

And most importantly, I would like to thank the Lord for giving me the strength and endurance to complete this Master's degree amidst the most trying of circumstances, and for providing for all of my needs according to His riches in glory in Christ Jesus my Lord.

I.                   INTRODUCTION

A.                 background

With the increase of Joint Force and Coalition participation in recent operations and exercises, the military is seeking ways to support a new way of doing business. The ability of geographically dispersed team members to communicate and collaborate has become of paramount interest. Collaborative software systems, also referred to as collaborative tools, which offer capabilities such as chat, video conferencing, document sharing and audio conferencing, are being looked to as the solution to support this new business paradigm. Numerous collaborative tools are available, and they vary significantly in the capabilities they provide. This makes it a challenging task to select a collaborative tool that meets the performance requirements of all organizations involved. For example, network environments are not standardized across the Joint or Coalition communities. In fact, U.S. troops face a bandwidth shortage that dictates where ships are sent, when drones can fly, and what kind of messages sailors and soldiers can receive. In military-planning operations, where the exchange and coordination of information between dispersed team members is necessary, collaborative tools must produce results within acceptable time intervals.

Thus, evaluating the performance reliability of a collaborative tool is more critical than ever, as is determining whether a collaborative tool can perform under limited-bandwidth and latency conditions. Currently, there are no standard evaluation criteria or test methodologies for evaluating the performance of collaborative tools.

This thesis presents the author's test methodology for evaluating the performance reliability of collaborative tools. It also describes the application of the test methodology during the test and evaluation of IWS performance characteristics and reports the results.

 

B.                thesis organization

This chapter gave a brief introduction to the problem and the motivation of the research. Chapter II presents an overview of collaboration, collaborative tools, and their use in military operations. Chapter III discusses four collaborative tools currently used in military operations. Chapter IV presents a detailed description of the author's test methodology. Chapter V documents how the test methodology was used to evaluate the performance of a particular collaborative tool and presents the results collected. Chapter VI contains conclusions and recommendations for future work.

II. The art of collaboration

A. overview

Human collaboration has been a subject of philosophy and social science throughout history [JWOD86]. Many disciplines, such as sociology and anthropology, are concerned with how people live and work together. Webster defines collaboration as "the act of working together united in labor" [WEB1913].

Collaboration brings together the knowledge, experience and skills of multiple team members in the development of a new product or plan, in a way that is more effective than having individual team members perform narrow tasks in isolation. Gutwin and Greenberg [GG00] have defined the "mechanics of collaboration" as the basic operations that must be accomplished to achieve a shared task. Another view [PG03] identifies four categories of mechanics: explicit communication and information gathering (for communication), and management of shared access and transfer (for coordination). Explicit communication is fundamental to collaboration and takes three forms: spoken, written and gestural. The other category for communicating is information gathering, which can involve five types of information: basic group awareness (who is working and what they are working on); activity information from objects (seeing the impact of manipulation on objects); activity information from people's bodies (watching how people act); visual evidence; and overhearing others' explicit communications. Management of shared access concerns the manner in which objects are accessed and used. Three activities must be considered with respect to this: obtaining a resource, reserving a resource for future use, and protecting someone's work. Coordination may involve the transfer of objects and tools between individuals to ensure that a task is divided. Two activities are associated with this: handoff (when an object or tool is transferred) and deposit (when a resource is put in a particular place to be retrieved later).

 

The following sections discuss two other important aspects of collaboration.

1. Resources

A collaborative process requires adequate resources so that team members have time to effectively collaborate. For example, too often an engineer or warfighter is assigned to a team, but is so busy fighting fires or responding to crises that he or she does not have the time to effectively collaborate. Management must provide adequate time and money to support collaboration.

2. Teamwork

Effective teamwork is required for collaboration. Team members must trust and respect one another, and there must be open communication and a willingness to accept input from others. There are often conflicting goals, so decision-making must be collaborative. This is shown in Figure 1 [KC02].

Figure 1: Collaboration Model

 

This figure shows two axes, cooperativeness and assertiveness. A low degree of assertiveness and cooperativeness represents avoidance of an issue, or the approach of "I don't care." A high degree of cooperativeness and a low degree of assertiveness represents accommodation; a high degree of assertiveness and a low degree of cooperativeness represents competition. Compromise, the approach of "sometimes I win and sometimes I lose," represents a moderate degree of both assertiveness and cooperativeness; a high degree of both cooperation and assertiveness represents the basis for a "win-win" collaborative approach. The key to the latter is to creatively search for solutions that can mutually satisfy the needs of the team, rather than focusing on competing solutions that involve tradeoffs or are mutually exclusive.

 

B. collaborative software systems

Collaborative software systems or tools facilitate the sharing of information and resources among individuals across geographic and temporal boundaries. These tools should accommodate all variations of interpersonal and group interactions, including one-to-one, one-to-many, many-to-one and many-to-many.

Ideally, the tools should be dynamic in nature and have the flexibility to support formal, informal and ad hoc collaborations. They must also make it natural and intuitive for the user to find collaborators and interact with them. A collaborative tool should accommodate real-life situations, such as interruptions, and still ensure work can be resumed seamlessly.

In collaborative software systems, information is being processed at every stage of the task. These processes -- information gathering, information sharing and information transfer -- can be thought of as a continuing cycle that involves identification of information needs, information acquisition, information organization and storage, information distribution, and information use [CC03]. These processes occur continually in a collaborative software system [PG03].

Collaborative tools provide value for an organization by enabling dispersed members of a work group to communicate, share knowledge, and develop plans and products. It has been shown that collocation facilitates communication and collaboration. Physical collocation is best, but virtual collocation through communication mechanisms and collaborative tools is the next best alternative for dispersed team members. Collaboration can happen synchronously, where all participants view information and/or meet at the same time, or asynchronously, where participants view information and provide feedback at different points in time. Collaborative tools become increasingly valuable as more people use them. For example, calendaring becomes more useful when more people are connected to the same electronic calendar and choose to keep their individual calendars up-to-date [WIQ].

The selection and use of collaborative tools will be based on technology availability and cost; geographic dispersion and related time zone differences; need for access by partners, suppliers and customers; product complexity and degree of technical issues; and other factors.

There are three different categories of collaborative tools:

1.                  Electronic communication tools send messages, files, data, or documents between people and hence facilitate the sharing of information. Some examples are e-mail, faxing, voice mail, and Web publishing.

2.                  Electronic conferencing tools facilitate the sharing of information in a more interactive way. Examples include data conferencing, voice conferencing, video and audio conferencing, discussion forums, a virtual discussion platform to facilitate and manage online text messages, chat rooms, and electronic meeting systems (EMS).

3.                  Collaborative management tools facilitate and manage group activities. Examples include electronic calendars (time management software) that schedule events and automatically notify and remind group members, project-management systems that schedule, track, and chart the steps in a project as it is being completed, workflow systems that provide collaborative management of tasks and documents within a business process, and knowledge-management systems that collect, organize, manage, and share information.

 

C. military use of collaborative technology

Collaboration has become a buzzword in the military community. The promise is enticing: warfighters working together online to solve problems faster and more responsively to customer needs, while perhaps also saving on travel and communications costs in the process.

Migration to collaborative tools is inevitable: in the near future, warfighters will be distributed among land-based (physical or geo-located) command centers, mobile (e.g., ship-, submarine- or vehicle-based) command centers, and, more importantly, virtual command centers that have no physical counterpart. How these distributed warfighters interact and inform each other of mission-planning progress and situation assessment, and how they establish their battle rhythm [JDPO02], is of utmost importance. Collaborative tools could significantly improve the effectiveness of personnel distributed among physical and virtual command centers.

 
III. survey of collaborative tools

This section provides an overview of some collaborative tools and how they are used in military operations today.

 

A. Information workspace (iws)

1. Overview

InfoWorkSpace (IWS) is a commercial-off-the-shelf (COTS) client-server software solution comprising a suite of collaborative tools that facilitate enterprise communication, data access, and knowledge management [IWS04]. IWS incorporates several third-party products, including PlaceWare/Microsoft, Oracle, the iPlanet/Sun ONE Directory Server and Web Server, and the Tomcat servlet engine/Apache.

IWS provides a secure virtual office organized into buildings, floors and rooms, where users can build online meeting places to interact on projects in real time. Accessed via a Web browser or Java client, its features include an instant-messaging client (LaunchPad), text chat (public and private), audio, Web video, application casting, desktop conferencing, a Virtual File Cabinet, a bulletin board, a collaborative whiteboard and shared Text Tool, threaded discussions (news groups), mail, and a calendar.

IWS can federate servers, which allows remote-client login to another server via authentication through the client's home server. User profiles (such as briefcase location) are stored on their home servers, and file cabinets are stored on their respective servers. Batch synchronization between federated servers is performed whenever a server comes online or joins the federation. Incremental synchronization is performed in real time as an account is updated. This includes inputs from the Lightweight Directory Access Protocol (LDAP) directory server, user and group administration panels, and user login. Data synchronized among federated peers includes User/Group ID, Home Host ID, Name, Distinguished Name (LDAP), Account Last Modified, User Display Name, First Name, Last Name, Email, Title, Password, Certificates, Last Login, Number of Login Failures, Locked Flag and Group Members.
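The batch-versus-incremental synchronization behavior described above can be pictured with a short sketch. This is an illustrative Python sketch under assumed names (AccountRecord, FederatedServer and their fields are hypothetical), not Ezenia's implementation.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Dict, List

    @dataclass
    class AccountRecord:
        # A subset of the data IWS synchronizes among federated peers.
        user_id: str
        home_host_id: str
        display_name: str
        email: str
        last_modified: datetime = field(default_factory=datetime.utcnow)

    class FederatedServer:
        def __init__(self, name: str):
            self.name = name
            self.accounts: Dict[str, AccountRecord] = {}

        def batch_sync_from(self, peer: "FederatedServer", since: datetime) -> int:
            # Batch synchronization: pull every record the peer changed since this
            # server was last online or last joined the federation.
            changed = [r for r in peer.accounts.values() if r.last_modified > since]
            for record in changed:
                self.accounts[record.user_id] = record
            return len(changed)

        def incremental_update(self, record: AccountRecord, peers: List["FederatedServer"]) -> None:
            # Incremental synchronization: push a single changed account to all
            # federated peers in (near) real time.
            self.accounts[record.user_id] = record
            for peer in peers:
                peer.accounts[record.user_id] = record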

2. Military Application

Combatant commands deploying InfoWorkSpace as part of the Collaborative Information Environment (CIE) include the US Pacific Command, the US Southern Command, the US Central Command, and the US European Command, with other combatant commands scheduled for deployment later in 2004. The United States Joint Forces Command (USJFCOM) also coordinated the installation required for InfoWorkSpace to be used in support of Operation Iraqi Freedom.

Sponsored by the Defense Intelligence Agency (DIA), the Joint Intelligence Virtual Architecture (JIVA) touches not only all direct U.S. intelligence agencies, but also each individual branch of the U.S. military. Through the JIVA program, Ezenia's collaborative software solution is deployed worldwide to thousands of users, enabling globally dispersed organizations within the intelligence community to function more effectively using InfoWorkSpace. Since 1998, JIVA's usage within the U.S. Intelligence Community and the military's major commands has enabled InfoWorkSpace to gain further deployments in many branches of the government, including the U.S. Air Force, Army, Navy and Marine Corps. [EZ04]

 

B. Defense collaboration tool suite (dcts)

1. Overview

The Defense Collaboration Tool Suite (DCTS) is an integrated set of applications providing interoperable, synchronous and asynchronous collaboration to U.S. Department of Defense (DoD) agencies, combatant commands, and military services [DCTS04]. DCTS has voice and video conferencing, document and application sharing, chat facilities, whiteboard facilities, and virtual-workspace sharing. It is not a single product but an evolving set of open standards. It has a client/server architecture comprising client workstations connected via a network to centralized servers. The client applications manage all local data processing, the user interface, and data export to other client-based commercial-off-the-shelf (COTS) software. DCTS Version 2.0 Phase I includes:

- Microsoft NetMeeting: provides Windows users with multi-point data conferencing, text chat, whiteboard, and file transfer, as well as point-to-point audio and video.

- Asynchrony Envoke: provides users of different systems with awareness or presence of other users, spaces and meetings.

- Sun Microsystems SunForum: provides shared applications and conferencing for PC and UNIX operating systems.

- Digital Dash Server: provides space navigation, awareness, shared file space, access control, VTC conference joins, broadcast messages and system-administration services.

- First Virtual Communications MultiPoint Control Unit (MCU): provides multipoint NetMeeting sessions.

- Microsoft SQL Server: provides a database and analysis for e-commerce, line-of-business products, and data warehousing.

 

2.                  Military Application

The DCTS standard suite and several certified collaboration tools were deployed during operations in Afghanistan and Iraq in support of both deliberate and crisis action planning. As of January 2004, DCTS V2 P1 is installed at 138 sites worldwide, at all combatant commands, major components and services, with another 218 planned for 2004. DCTS will remain in place until the Next Generation Collaboration Service (NGCS) is on-line in 2005 or 2006.

 

C. groove workspace

1. Overview

Groove is an application that facilitates collaboration and communication among small groups [GRV04]. It is a commercial product invented by Lotus Notes creator Ray Ozzie. A Groove user creates a "workspace" and invites other people into it. Each person who responds to an invitation becomes a member of that workspace and is sent a copy that is installed on his or her hard drive. When any one member makes a change to their copy, that change is sent to all copies for update.

Groove's basic services (including security, messaging, store-and-forward delivery, firewall transparency, ad hoc group formation, and change notification) may be customized with tools. These include a calendar, discussion support, file sharing, an outliner, pictures, a notepad, a sketchpad, and a Web browser.

2. Military Application

Groove's software connects intelligence agents working in offices around the world. "Groove has always met the security requirements we deal with in our intelligence work, and over the years the performance has become faster and requires less bandwidth," says Kevin Newmeyer, program director for the Inter-American Committee Against Terrorism in Washington, D.C.

 

D. Webbe

1.                  Overview

The WEBBE MX Server is designed for real-time messaging and collaboration. It provides a highly reliable, scalable, and easy-to-manage infrastructure [WEB04]. Its n-tier design combines the reliability and scalability of SQL with the accessibility and openness of Internet Information Services (IIS). It includes a real-time messaging service built on a secure, standards-oriented "federated" architecture. The architecture is a peer-to-peer design that is distributed and modular, with no single point of failure (see Figure 2). It provides presence awareness, intelligent routing, and guaranteed message delivery to member servers within a federation. A single federation can support up to 256 member servers. WEBBE uses the XML Distributed Architecture, which distributes real-time messages on the IP network rather than centralizing applications at the network core. All WEBBE messages are considered delivered only when retrieved and explicitly acknowledged by the destination client; if no acknowledgement is received, the server stores the message for later delivery.

Figure 2: WEBBE Architecture
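The guaranteed-delivery rule described above (a message counts as delivered only on an explicit acknowledgement from the destination client; otherwise it is stored for a later attempt) can be sketched in a few lines. This is an illustrative Python sketch with hypothetical names, not WEBBE's actual API.

    from collections import deque

    class StoreAndForwardQueue:
        # Messages stay queued until the destination client explicitly acknowledges them.
        def __init__(self):
            self.pending = deque()   # (message_id, payload) awaiting acknowledgement
            self.delivered = []

        def submit(self, message_id: str, payload: str) -> None:
            self.pending.append((message_id, payload))

        def attempt_delivery(self, send, wait_for_ack) -> None:
            # send(payload) transmits to the destination client;
            # wait_for_ack(message_id) returns True only on an explicit ACK.
            for _ in range(len(self.pending)):
                message_id, payload = self.pending.popleft()
                send(payload)
                if wait_for_ack(message_id):
                    self.delivered.append(message_id)
                else:
                    # No acknowledgement: store the message for later delivery.
                    self.pending.append((message_id, payload))

    # Example use: q = StoreAndForwardQueue(); q.submit("m1", "status update")
    # q.attempt_delivery(send=print, wait_for_ack=lambda message_id: True)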

 

2.                  Military Application

In June of 2003, Webbe successfully passed the DoD interoperability certification test for inclusion in DCTS [WEB04]. A GSA contract was awarded in May 2004 to upgrade the Webbe Instant Messaging Tool software (server/client) to support the Special Operations Mission Planning Environment (SOMPE) mission. Other commercially available instant-messaging tools do not provide the voice instant messaging and highly compressed audio needed for this application.

 
IV. reliability assessment of collaborative tools

A.                 software reliability

IEEE 610.12-1990 defines reliability as "the ability of a system or component to perform its required functions under stated conditions for a specified period of time" [IEEE90]. Classical reliability theory generally deals with hardware, but the reliability of software should be measured and evaluated as it is for hardware.

While the author's focus is on the importance of measuring the reliability of collaborative tools in terms of performance, it is worth noting that usability is the most researched aspect of collaborative tools. Usability addresses the relationship between tools and their users: for a tool to be effective, it must allow intended users to accomplish their tasks in the best way possible. Usability is one of the focuses of the field of human-computer interaction; as the name suggests, it has to do with bridging the gap between people and machines. The Design of Everyday Things, by Don Norman, discusses the importance of usability in design. It does not focus on computers, but on all kinds of everyday things (e.g., doors that pull open but look like they should push open) that are examples of bad user-interface design.

 

B.                performance: a reliability attribute

Performance can be characterized by three criteria: throughput, response time, and utilization. Throughput is the number of tasks accomplished per unit of time, for example, transactions per second. Response time is the time elapsed between input arrival and delivery of output. Utilization is the percentage of time a component is busy [BR00].

There are many different models of software quality, but almost all of them incorporate reliability. ISO 9126 [1991] defines six quality characteristics, one of which is reliability.
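As a concrete illustration of the three criteria, the short Python sketch below computes throughput, average response time, and utilization from a log of task start and finish times. The function name and the single-component view of utilization are illustrative assumptions, not part of the cited definitions.

    def performance_metrics(tasks, window_seconds):
        # tasks: list of (start, finish) timestamps in seconds, all observed
        # within a measurement window of length window_seconds.
        completed = len(tasks)
        throughput = completed / window_seconds                        # tasks per unit time
        response_times = [finish - start for start, finish in tasks]   # input arrival to output delivery
        avg_response = sum(response_times) / completed
        busy_time = sum(response_times)                                # time the component spent serving
        utilization = min(busy_time / window_seconds, 1.0)             # fraction of the window it was busy
        return throughput, avg_response, utilization

    # Example: three transactions observed over a 10-second window.
    print(performance_metrics([(0.0, 0.8), (2.0, 3.1), (5.0, 5.6)], 10.0))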

C.                related research

As early as 1988, people began evaluating technology for use in collaboration. Some of the first collaborative tools evaluated were group decision support systems (GDSS). Kraemer and King [KK88] review several GDSS that had been configured to meet the needs of groups at work and conduct experiments with them. They also present an assessment of GDSS development and use in the United States, and trace the evolution of GDSS to support activities other than decision-making, including communication and information processing.

There have been a number of studies related to evaluating both collaborative software and groupware. Groupware is any type of software designed for groups and for communication that integrates work on a single project by several concurrent users at separated workstations. It was pioneered by Lotus Software with the popular Lotus Notes application running in connection with a Lotus Domino server. Studies have been done to determine how, and if, inspection methods complement field methods for evaluating groupware. In particular, such studies aim to discover what kinds of usability problems exist and whether inspection methods can provide an overall assessment of the usability of a system. To explore these issues, a user-based study of how collaborators used the TeamWave Workplace (TW) groupware tool was performed, and the results are reported in [SMGG01].

Many collaborative software tools have been developed in recent years in response to the growing interest of many organizations in becoming learning organizations. One paper applies Bayesian networks as an evaluation methodology to rate the suitability of a given collaborative tool in supporting the mental-model concept of organizational learning [ERF02].

 

 

D.                test methodology for the assessment of performance reliability of collaborative tools

A number of methods can be used to evaluate the reliability and performance of collaborative tools. The development of a test methodology was complicated by the lack of standards or specifications describing what collaborative tools should do to assist in military operations. The author's test methodology for evaluating the performance reliability of collaborative tools consists of the following steps:

1.                  Define the test objective.

2.                  Identify the requirements of the domain or mission for which the collaborative tool will be used.

3.                  Identify performance objectives and define specific parameters for assessment.

4.                  Establish a test architecture (testbed) based on operational requirements and performance criteria.

5.                  Document the system configuration.

6.                  Develop test cases for collaborative-tool capabilities based on performance criteria.

7.                  Identify relevant scenarios for evaluating performance during test-case execution.

8.                  Execute tests.

9.                  Report the test results.

V.               application of test methodology: iws case study

The following sections provide an overview of how the test methodology was applied in evaluating the collaborative tool, InfoWorkSpace (IWS) version 2.5.1.2, by Ezenia.

 

A.                 test objective

Bandwidth is a critical issue for U.S. military units. Over-subscribed satellite resources, line-of-sight radio limitations, and legacy switching constraints all contribute to severe bandwidth limitations. The Navy especially faces challenges with connectivity due to bandwidth limitations and occasional loss of connectivity with the satellites it uses. According to GlobalSecurity.org analyst Patrick Garrett, the fact that every message is transmitted electronically -- from maintenance supply requests to food orders to letters back home -- makes the Navy's network traffic jam even worse [SN03].

This research was sponsored by the Defense Information Systems Agency (DISA) Collaboration Management Office (CMO) to investigate the performance characteristics of IWS. The tests were executed in the SPAWAR Systems Center San Diego Reconfigurable Land-Based Test Site (SSC SD RLBTS) lab, and examined the product's bandwidth utilization, throughput, and response time for each feature.

 

B.                Domain Requirements

This research was performed to determine to what extent IWS can be used when limited-bandwidth conditions exist in ship-to-ship communications. The requirements identified in support of ship-to-ship communications are as follows:

- The collaborative tool shall be able to recover if communication is broken (resiliency).

- When not in use, the collaborative tool must not use excessive bandwidth (keep-alive or feature sustainment).

- Features of the collaborative tool shall be able to function under the real-world bandwidth limitations and delay/congestion conditions aboard ships.

 

C.                performance objectives

The performance objectives of the testing were to characterize the reliability of each IWS feature with respect to communications resiliency, sustainment capability, bandwidth use, and throughput, and to determine quality limitations (degradation points) for each capability. The parameters chosen were bandwidth and latency, with baseline values of 256 kbps and 10 ms; these were based on real-world conditions aboard ship.

The features we considered most important to measure were chat, file upload, whiteboard and audio, since they are used for tasks such as brainstorming about a briefing, preparing the briefing, analyzing imagery, and creating logistical or operational plans.

 

D.                test environment

In accordance with step four of the test methodology, a testbed architecture was established to model the operational requirements for ship-to-ship collaboration at a reduced bandwidth. The testbed was set up in the RLBTS satellite lab located at SSC SD, Building 606, Lab 339. IWS was set up using a simulated platform-to-platform network configuration with two domains, as shown in Figure 3. The testbed architecture included two servers, representing the two platforms, with three clients, each communicating via a satellite simulator (SATSIM) that can control bandwidth and latency. The interaction between the clients collaborating on various planning activities is described in detail in conjunction with the description of the scenarios that were executed.

 

Figure 3: Testbed Architecture

 

1.                  Hardware Configuration

The hardware used was that required by the IWS Technical Specifications Manual; current users of IWS also use this hardware configuration. The hardware configuration and installed operating systems for all systems (server and client) used during the performance tests are documented in Table 1.


Table 1: Hardware Configuration

 

Tests were conducted with the SX/12 Data Channel Satellite Simulator (SATSIM), using the EIA-530 interface. The simulator's maximum data rate is 10 megabits/second, but tests were for the most part conducted with a maximum data rate of 256 kilobits/second. The time delay of data transfer was configured from 10 ms up to 1000 ms. A bit error rate (BER) of 1E-6 was selected. Test-case runs did not vary the BER, since doing so would have added another variable to the bandwidth and latency variation that was the primary focus of this testing.


 

2.                  Software Configuration

Software installed on the servers included Oracle8i 8.1.7, Jakarta-Tomcat 3.2.2, Oracle Internet File System 1.0.8.1.0, AsPerl, iPlanet Server Products 5.0, IWS2512 2.5.1.2, IWSSP251 Service Pack 1, LiveUpdate 1.7 (Symantec Corporation), Microsoft Baseline Analyzer, Norton Antivirus Corporate Edition, PlaceWare Media Plug-In, PlaceWare Server on port 8087, Windows 2000 Hotfix, Windows 2000 Security Rollup Package, Windows 2000 Service Pack2, and WinZip.

Software installed on each client station included Adobe Reader 6.0, InfoWorkSpace 2.5.1.2, Java 2 Runtime Environment Standard Edition v1.3.1_07, LiveUpdate 1.7 (Symantec Corporation), Microsoft Baseline Security Analyzer, Microsoft Office 2000 Professional, Norton Antivirus Corporate Edition, QuickCam, RealPlayer 7 Basic, WinZip, ATI Display Driver, InfoWorkSpace 2.5.1.2 Browser Plugin, InfoWorkSpace Webcam, Internet Explorer 5.01, PlaceWare Add-In for PowerPoint version 7, PlaceWare Media Plug-In, and PlaceWare Snapshot Plug-In.

The WAN analyzer software operates on a Linux workstation (a two-Ethernet-port Rackable box was used) and permits the monitoring of every packet that passes either Ethernet port (called interface 0 and 1 in the output of crl_delay). The crl_delay tool records the one-way latency of each packet between the two Ethernet ports (or the East and West hubs in our RLBTS ShadowLab), as well as the latency or TCP round-trip time (TCPRTT) for all TCP packets, and outputs this in real time or nearly so. At the end of each crl_delay run, all relevant numbers (e.g., connection start time, connection duration, number of packets during connection) and derived latency statistics are tabulated for each TCP connection open during the run. This tabulation is referred to as the crl_delay "End of Run Statistics" (EORS), from which many useful conclusions about the functioning of applications over the WAN can be deduced. Packets for other protocols such as UDP (protocol 17) are also recorded.
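The kind of end-of-run tabulation described above can be approximated with a few lines of Python. This is not the crl_delay source; it is a sketch under assumed input fields ('conn', 'ts', 'rtt') showing how per-connection start time, duration, packet count, and round-trip-time statistics could be derived from timestamped packet records.

    from statistics import mean, median

    def end_of_run_stats(packets):
        # packets: list of dicts with 'conn' (connection id), 'ts' (capture time
        # in seconds) and 'rtt' (measured TCP round-trip time, or None).
        by_conn = {}
        for p in packets:
            by_conn.setdefault(p["conn"], []).append(p)
        summary = {}
        for conn, pkts in by_conn.items():
            times = [p["ts"] for p in pkts]
            rtts = [p["rtt"] for p in pkts if p.get("rtt") is not None]
            summary[conn] = {
                "start": min(times),
                "duration": max(times) - min(times),
                "packets": len(pkts),
                "mean_rtt": mean(rtts) if rtts else None,
                "median_rtt": median(rtts) if rtts else None,
            }
        return summary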

 


E.                 test cases

Test cases were developed for chat, file transfer, whiteboard, audio, video, start-up, feature sustainment, resiliency, cross-server login time, and combined applications (i.e., testing of multiple features simultaneously). The test cases focused on collecting data on the performance of certain applications under certain conditions, including emulation of three WAN environments representing high (256/128 kbps), medium (64 kbps), and low (9.6 kbps) bandwidth. These were chosen to model real-world conditions aboard ships, where at any given time each one of these is a possible reality. In addition, each test included the introduction of time delay (TD) in amounts from 10 ms up to, in some cases, 1000 ms.
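One way to picture the resulting test matrix is to enumerate the feature, bandwidth, and delay combinations drawn from the values above. The Python sketch below is illustrative only; it enumerates the candidate cross-product, and not every combination was actually executed.

    from itertools import product

    # Emulated WAN conditions from the test plan (256 stands in for the
    # high 256/128 kbps setting).
    BANDWIDTHS_KBPS = [256, 64, 9.6]        # high, medium, low
    DELAYS_MS = [10, 250, 500, 1000]
    FEATURES = ["chat", "file transfer", "whiteboard", "audio", "video",
                "startup", "feature sustainment", "resiliency",
                "cross-server login", "combined applications"]

    test_cases = [
        {"feature": f, "bandwidth_kbps": bw, "delay_ms": d}
        for f, bw, d in product(FEATURES, BANDWIDTHS_KBPS, DELAYS_MS)
    ]
    print(len(test_cases), "candidate test cases")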

 

F.                 scenarios

In accordance with step seven of the test methodology, various scenarios were identified for use in conjunction with the test cases. The scenarios exercised the system features or capabilities in a manner consistent with the tool's operational use. Table 2 describes the scenarios.

 

Scenario 1 - Point to Point
    Clients involved: Client1/IWS1 talks to remote Client3/IWS2
    Capability: Chat, Audio

Scenario 2 - Point to Multi-Point
    Clients involved: Client1/IWS1 broadcasts to remote Clients 3 and 4/IWS2
    Capability: Chat, Audio

Scenario 3 - Multipoint to Multipoint
    Clients involved: Clients 1, 2 and 5/IWS1 and remote Clients 3 and 4/IWS2 all talk with one another
    Capability: Chat, Audio

Scenario 4 - Cross-server login
    Clients involved: Remote Client3 on IWS2 logs into IWS1 and joins a collaboration session in a sidebar room
    Capability: Chat, file transfer, whiteboard, video, audio, resiliency, application sustainment

Table 2: Test Case Scenarios


 

G.                test execution and performance results

1.                  Startup

Startup represents the product's bandwidth usage during initial product startup. During startup, IWS downloads and installs data from the server. The client loads active user, room, bulletin board, and chat data, and also downloads several Java applets (e.g., Clock Service, Text Chat Service, Logging Service, Object ID Service, User Preferences Service, Room Toolset Administration Service, Places Navigator Service, Geospace Admin Service, and Quality of Service). The number of bits transmitted from the client to the server over the 86-second period it took for the client to complete the startup process was approximately 480,024 bits, at a bandwidth of 128 kbps and a latency of 10 ms. The total number of bits transmitted from the server to the client over the same 86-second period was 868,216. Thus, an average of 15,677 bits per second was exchanged between client and server over the 86-second initial startup period. This information is illustrated in Figure 4.

The overhead of loading all of this information on initial startup is the result of using a Java plugin (Java Runtime Environment 1.3.1) to launch the IWS browser-based client application. Once the initial startup has been performed, this data is present on the client.

Figure 4: Startup
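The average rate quoted above follows directly from the measured totals:

    client_to_server_bits = 480_024    # bits sent by the client during startup
    server_to_client_bits = 868_216    # bits sent by the server during startup
    startup_seconds = 86

    total_bits = client_to_server_bits + server_to_client_bits   # 1,348,240 bits
    average_bps = total_bits / startup_seconds                   # about 15,677 bits per second
    print(round(average_bps))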

 

2.                  Keep-Alive (Maintenance of Bandwidth)

Application sustainment or "keep-alive" bandwidth is the product's bandwidth usage during periods of no end-user activity. The "keep-alive" values (i.e., client send, server send) averaged traffic from client to server and from server to client (independent of port number) over a three-minute period. This testing was performed after two clients had successfully logged into a room on the remote server. Test operators stopped all keyboard and user-interface inputs, and the network traffic between the local and remote servers was recorded for approximately three minutes. The "keep-alive" tests were executed for file-cabinet uploads, whiteboard use, and video use, to determine if any communication overhead was caused by the feature having been in use prior to the period of inactivity. Test results are provided in Figure 5.

 

Figure 5: Keep Alive Bits per Second

 

Data analysis revealed that the four client processes provide status updates by opening an active port, providing their data, and then closing that connection.

An additional test was performed to measure how much network traffic passed through the testbed wide-area network when two local and two remote clients were left idle in SideBarRoom1 on server IWS1 for 16 hours (57,600 seconds). Results showed that a total of 3,621,831 bytes passed through the WAN, an average of 62.88 bytes/second or 503.03 bits/second. This provides an estimate of the minimum bandwidth required to maintain a session between two servers with four idle clients.
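The idle-session figures reduce to a short calculation:

    idle_bytes = 3_621_831
    idle_seconds = 16 * 60 * 60                      # 57,600 seconds
    bytes_per_second = idle_bytes / idle_seconds     # about 62.88 bytes/second
    bits_per_second = bytes_per_second * 8           # about 503.03 bits/second
    print(round(bytes_per_second, 2), round(bits_per_second, 2))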

 

3.                  Communications Resiliency

Communications resiliency was tested after a collaboration session had been successfully initiated between test platforms. Test operators stopped all keyboard and user-interface input between the local and remote servers; the SATSIM was then brought down for approximately three minutes and brought back up. The SATSIM was brought down by setting the bit error rate (BER) to 100%.

In general, when no applications had been started, about 612 kbits were transmitted across the WAN over 4 seconds while the connection was being restored after the SATSIM down time. Federation was eventually restored; however, the remote client could not get into the IWS1 server sidebar rooms.

When testing Chat Resiliency, about 75 seconds after the connection was restored, approximately 9 kbits were transferred across the WAN, but the remote client connection was rejected.

During the remote file-transfer test, the local Client1 saw that the remote Client3 had left the sidebar room. The server IWS1 indicated that the channel was closed. The Client3 file-transfer progress bar stopped, and a couple of seconds later a Java exception was thrown. About 60 seconds after the connection was restored, approximately 8 kbits were transferred across the WAN. About 115 seconds after the connection was restored, approximately 612 kbits were transferred across the WAN during 4.5 seconds while the federation was restored, but the remote client application was locked up.

When testing Audio Resiliency, the local Client1 was notified that the remote Client3 had left the chat room. The local server IWS1 reported that the channel was closed. About 3 minutes and 40 seconds after the connection was restored, about 612 kbits crossed the WAN over 4 seconds while the federation was restored. When federation was restored, Client3 was unable to join IWS1 sidebar rooms.

During whiteboard testing, when the SATSIM was blocked, Client3 received a message box saying that the connection had been lost; when federation was restored, Client3 was unable to join IWS1 sidebar rooms. Logging off Client3 and then logging back in was the only way Client3 was able to get the objects restored. About 5 minutes and 20 seconds after the connection was restored, approximately 612 kbits were transferred across the WAN during 4 seconds while the federation was restored, but the remote client application was locked up.

A general observation about this group of tests was that when the servers re-federated after the WAN connection was restored, about 610 kbits of data had to cross the WAN; at 9.6 kbps, that would take over one minute. This can be seen in Figure 6.

During SATSIM down time, remote clients (i.e., those on the IWS2 server) did not detect that they had lost communications. The local server (IWS1) started to indicate the loss of remote participants approximately one minute after a communications outage. The product does have a Quality of Service (QoS) indicator display, which was informative in the high-bandwidth scenario, but in the low-bandwidth scenario the display indicated the same QoS whether SATSIM communications were up or down. Upon re-establishment of communications, the remote clients received a message from the server indicating that their attendance in the meeting had been terminated, and they were not automatically reconnected to the meeting. Overall, the applications ceased working for the remote client when a failure in communications occurred.

File transfers between remote clients and the server that experienced a communications outage during the transfer necessitated resending the entire file upon resumption of communications. Note that if remote clients attempted to send data during the SATSIM down time, they received an error message indicating they had lost their meeting connection.

Resiliency bandwidth-upon-re-federation test results are provided in Figure 6.

 

Figure 6: Resiliency Bits Loaded upon Re-Federation

 

4.                  Chat

Four scenarios were used in the chat application testing, as described in Table 3. Each scenario was executed over a 90-second test window. Clients from the local server and the remote server entered and exited different sidebar rooms, exchanged broadcast messages with IWS clients in those rooms, and exchanged private messages with IWS clients in other sidebar rooms.

Chat testing was conducted at high, medium, and low bandwidth and latency combinations. The average bits per second for the data runs are shown in Table 3; this is the total bits transferred over the WAN divided by 90. Chat data flow was examined by comparing clients logged into a local-server sidebar room versus a remote-server sidebar room. When the remote client was logged into the sidebar room, the server to which the room belongs communicated directly with the remote client, not via the remote client's server. This is apparently a result of the federation of servers.


 

Bandwidth/Delay (kbps/ms)[1]      256/10     64/10     9.6/10

Scenario 1
  Room Chat                       1192[2]    908       483
  Private Chat                    1294       1113      -
  Room & Private Chat             1159       1873      -

Scenario 2
  Room Chat                       2017       2161      1544
  Private Chat                    2201       -         -
  Room & Private Chat             2549       -         -

Scenario 3
  Room Chat                       2698       4788      2838
  Private Chat                    4506       -         -
  Room & Private Chat             5941       -         -

Scenario 4
  Room Chat                       10532      10922     7429
  Private Chat                    10678      -         -
  Room & Private Chat             11379      -         -
  Active User/Send Note           10839      -         -

Table 3: Chat Test Cases and Results

 

Additional chat testing was conducted to determine the bandwidth level at which degradation of quality occurred. Bandwidth testing began at 9.6 kbps/10 ms and also checked 4.8/10, 2.4/10 and 1.2/10 (kbps/ms). At 1.2 kbps, operators observed significant delays, with chat from one operator displaying at a destination client about 30 seconds to 2.5 minutes after the originator sent the message. The longer 2.5-minute delays were observed in Scenario 3, when 5 operators (and therefore more traffic) were active. Chat test results are provided in Figure 7.

Figure 7: Chat Test Results

 

In summary, the data showed that a basic one-client to one-client room chat (Scenario 1) required approximately 1 kbps across the WAN. A one-to-two-client chat (Scenario 2) required about double the bandwidth, at 2 kbps. For Scenario 3, five users required about 4.5 to 5 kbps. The most intensive usage for a single client occurred when a client logged in remotely to another server's sidebar room (Scenario 4); this required about 10 kbps for one-client to one-client chat. No significant difference between the types of chat was observed (i.e., room, private or active user, where active user was measured when both clients were logged into the same server). The apparently lower bits per second for a 9.6 kbps communications path is puzzling; it was verified that IWS does not perform compression, and the result may be anomalous. The results support observations that chat began to see significant delays when the bandwidth was 1.2 kbps for Scenario 1 and 9.6 kbps for Scenario 4. Additional delay added a proportional delay to the observed response time for this application.
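A rough capacity check against the measured chat rates is consistent with where the delays were observed. The 80% utilization threshold below is an illustrative assumption, not a figure from the testing; the chat rates are taken from the 256/10 column of Table 3.

    MEASURED_CHAT_BPS = {
        "Scenario 1 room chat (1-to-1)": 1_192,
        "Scenario 4 room chat (remote login)": 10_532,
    }

    def link_is_adequate(required_bps, link_kbps, max_utilization=0.8):
        # Hypothetical rule of thumb: flag links where chat alone would push
        # steady-state utilization past 80% of capacity.
        return required_bps <= link_kbps * 1000 * max_utilization

    for name, bps in MEASURED_CHAT_BPS.items():
        for link_kbps in (9.6, 1.2):
            verdict = "ok" if link_is_adequate(bps, link_kbps) else "expect delays"
            print(f"{name} over {link_kbps} kbps: {verdict}")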

 

5.                  File Transfer

File transfer to both the File Cabinet and the Briefcase was performed using test-procedure instructions to upload or download a file from or to any room's File Cabinet on the remote server. In addition, Scenario 4 was used, such that the remote client's file transfer across the WAN was subject to the bandwidth and delay constraints listed in Table 4. File sizes of 100 kB, 500 kB and 1 MB were used during test execution. The test cases, each comprising a file size transferred at a given bandwidth/delay, and their results are shown in Table 4.

 

Bandwidth/Delay (kbps/ms)

Average bps[3]
File Size    128/250   64/250    9.6/250   128/500   64/500    9.6/500   128/1000  64/1000   9.6/1000  64/10     9.6/10
100 kB       51820     41105     7982      39077     33911     8358      21884     21759     7592      55892     8967
500 kB       69714     58577     -         69245     45437     -         30173     25134     -         61234     -
1000 kB      88906     59329     -         61779     48499     -         32031     33822     -         64541     -

Net bits[4]
File Size    128/250   64/250    9.6/250   128/500   64/500    9.6/500   128/1000  64/1000   9.6/1000  64/10     9.6/10
100 kB       984580    1027625   993703    951546    983419    1027904   962896    1022664   1009672   950168    1049184
500 kB       4601124   4510429   -         4500202   4498263   -         4465604   4549344   -         4531384   -
1000 kB      8801694   8840021   -         8832114   8778319   -         8776494   8861376   -         8906688   -

Time of transfer (seconds)[5]
File Size    128/250   64/250    9.6/250   128/500   64/500    9.6/500   128/1000  64/1000   9.6/1000  64/10     9.6/10
100 kB       19        25        125       25        29        123       44        47        133       17        117
500 kB       66        77        -         66        99        -         148       181       -         74        -
1000 kB      96        149       -         143       181       -         274       262       -         138       -

Table 4: File Transfer Test Cases and Results

 

A summary of file-transfer results for the File Cabinet is provided in Figure 8, where the bandwidth is held constant and the delay varies, and in Figure 9, where the delay is held constant and the bandwidth varies.

Figure 8: File Transfer Test Results

 

In summary, at a low delay of 10 ms the data transfer rate approaches the limit of the communications path.  Higher delays in the 500 to 1000 ms range have a significant impact on throughput.  The larger the file, the more efficient the application appeared at transferring it, but this did not make up for the overall time required to transfer a large file.
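One plausible explanation for the delay sensitivity, sketched below, is a window-limited transfer in which throughput cannot exceed the in-flight data divided by the round-trip time; the 8 KB effective window is purely an illustrative assumption and not a measured property of IWS or its file-transfer protocol.

    # Throughput bound for a window-limited transfer: window / round-trip time.
    # The 8 KB window is an assumption chosen only to illustrate the trend.

    WINDOW_BYTES = 8 * 1024  # assumed effective in-flight data

    def window_limited_bps(window_bytes: int, one_way_delay_ms: float) -> float:
        rtt_s = 2 * one_way_delay_ms / 1000.0
        return window_bytes * 8 / rtt_s

    for delay_ms in (10, 250, 500, 1000):
        bound = window_limited_bps(WINDOW_BYTES, delay_ms)
        print(f"{delay_ms:>5} ms delay -> at most {bound/1000:.0f} kbps")
    # 10 ms  -> ~3277 kbps (the 64-128 kbps link, not the window, is the limit)
    # 250 ms -> ~131 kbps, 500 ms -> ~66 kbps, 1000 ms -> ~33 kbps, which tracks
    # the falling throughput in Table 4 as delay grows.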

Figure 9: File Transfer Test Results with Varying Bandwidth

 

6.                  Whiteboard

Test execution began at the bandwidth/latency rates of 128/10 ms.  Scenario 4 was performed with the test case in which all clients imported images.  Operator observation revealed that after the remote Client6 imported an image, it took 2 seconds for the image to render locally and 4 seconds for the remote client to see it.

The same test was conducted at the bandwidth/latency rates of 64/10 ms.  From a user perspective, it appeared that after the local Client2 imported the image, it took 4 seconds for the remote Client3 to see it.

The test was then conducted at the bandwidth/latency rates of 9.6/10 ms, with a slight variation: the addition of another remote client.  Again, from a user perspective, after the remote Client6 imported the image, it took 4 seconds for it to appear on that client, 30 seconds for the other remote Client3 to see it, and only 20 seconds for the two local Clients 1 and 2 to see it.

Whiteboard testing continued with Scenario 4 at the bandwidth/latency rates of 128/10 ms.  This test case involved drawing objects (geometric shapes and text) on the whiteboard.  Results showed that it took less than one second for the local clients to see the objects, and approximately two to three seconds for the remote client.

Tests conducted at 9.6/10 ms showed a range of results, depending on the object drawn or imported to the whiteboard.  For example, when all clients simultaneously drew and imported small images such as circles, rectangles, and squares, it took five seconds for the remote client to see the objects on its whiteboard.  Similarly, when a medium-size circle was drawn by a remote client, it took 7 seconds for the client on the local server to see it.  This would seem to substantiate the author's hypothesis that the bigger the object, the longer it takes to render on the whiteboard.  Repeated tests confirmed this, with larger objects taking 9-11 seconds to render.  During whiteboard testing, higher sustained average transfer rates of 10 to 13 kbps were observed while larger objects were in use.  A summary of whiteboard test results is provided in Figure 10.
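A rough serialization-plus-propagation model, sketched below with hypothetical object sizes, is consistent with these observations; the actual IWS whiteboard encoding was not examined, so the byte counts are assumptions only.

    # Rough model of remote whiteboard render delay: time to serialize the
    # object over the constrained link plus the one-way propagation delay.
    # Object sizes are hypothetical, not measured IWS payloads.

    def render_delay_s(object_bytes: int, bandwidth_kbps: float, one_way_delay_ms: float) -> float:
        serialization = object_bytes * 8 / (bandwidth_kbps * 1000)
        return serialization + one_way_delay_ms / 1000.0

    # Small shape (~5 KB) vs. larger imported image (~12 KB) over a 9.6 kbps link:
    print(round(render_delay_s(5_000, 9.6, 10), 1))   # ~4.2 s
    print(round(render_delay_s(12_000, 9.6, 10), 1))  # ~10.0 s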

 

Figure 10: Whiteboard Test Results


 

In summary, under optimal bandwidth constraints of 128/10 ms, when the drawing action occurred from the remote client, a 3-second delay was noticed before rendering was complete.  However, when drawing was initiated by the local client, the rendering was instantaneous.

 

7.                  Video

IWS uses WebCam, a third-party application, for its video capability.  Video testing was conducted under otherwise optimal conditions, at bandwidths of 256, 128, and 9.6 kbps with a 10 ms delay.  Test scenario 1 involved a local client using the WebCam application to display video of a remote client logged into the same sidebar room session.

During IWS video testing, quality was poor and there was a 3-second transmission delay.  The transmitted video appeared to be still pictures sent every 3 seconds or so.  The picture appeared to update only every couple of seconds, even at the higher bandwidth of 2 Mbps.  The video of the remote client at the local client stopped updating during one test run, so the WebCam application was stopped and restarted to restore video.  At 9.6 kbps, the remote video continued to update, but at a slower rate of one update every 7-10 seconds.  There was also difficulty in setting up the tests; the solution was to reboot the servers until the video application worked.  A summary of the video test results is provided in Figure 11.

Figure 11: Video Test Results

 

In summary, for the most basic scenario 1, where one client talks to another client across the WAN, the approximate average bandwidth was 28 kbps.  The picture-update rate corresponds with the data observations that video was transmitted approximately every 2-3 seconds across the WAN.
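A quick consistency check, under the assumption that each update is a single still image, relates the ~28 kbps average to the observed 2-3 second update interval; the implied frame size is an inference for illustration, not a measured WebCam parameter.

    # Back-of-the-envelope check that the ~28 kbps average and the 2-3 second
    # picture-update interval are mutually consistent.

    def implied_frame_bytes(avg_bps: float, update_interval_s: float) -> float:
        return avg_bps * update_interval_s / 8

    for interval in (2, 3):
        print(interval, round(implied_frame_bytes(28_000, interval) / 1024, 1), "KB per still")
    # ~6.8-10.3 KB per still image, a plausible size for a small JPEG frame.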

 

8.                  Audio

The IWS audio, or Voice-over-IP, capability was tested for each scenario listed in Table 2 at the bandwidth/latency rates of 128/10, 64/10, 32/10, and 9.6/10 kbps/ms.  The recorded times were the amount of time it took for all clients to receive the full one-way voice stream.

Various problems appeared, such as choppy and slightly delayed voice, and quality that seemed to degrade over time.  The test cases and their results are described in this section.

During the first test case, the local Client 1 used the private chat/talk function to transmit a single 10-second voice count from 1 to 10 followed by "over", to which the remote Client 3 responded with the same count followed by "over".  Test scenario 2 involved a local client talking to two remote clients.  Test scenario 3 was defined as Client 1 talking to Client 3 and Client 2 talking to Client 4 (two separate remote conversations).  When audio deterioration was observed, a paragraph was read instead of the count from 1 to 10 followed by "over", to determine whether the audio was recognizable, since the counting was easier for the receiver to understand because the receiver knew what to expect.  Private and broadcast audio were tested to see whether a different amount of bandwidth was required for these functions.  The test cases, at a given bandwidth/delay, and their results are shown in Table 5.

 

 

Bandwidth (kbps)/Delay (ms)     128/10    64/10    32/10   9.6/10   128/250   64/250   32/250  9.6/250
Scenario 1
  Average bps transferred       18,060   18,787   19,088    8,829    18,704   19,051   17,117    8,694
  Net bits transferred         433,448  319,376  362,664  335,504   392,784  361,960  359,456  330,360
  Test duration (s)                 24       17       19       38        21       19       21       38

Bandwidth (kbps)/Delay (ms)    128/500   64/500   32/500  9.6/500  128/1000  64/1000  32/1000 9.6/1000
Scenario 1
  Average bps transferred       18,424   18,530   19,087    8,944    18,737   17,612   17,733    8,183
  Net bits transferred         386,904  370,592  362,656  357,760   449,680  352,232  354,656  343,672
  Test duration (s)                 21       20       19       40        24       20       20       42

Bandwidth (kbps)/Delay (ms)     128/10    64/10    32/10   9.6/10
Scenario 2
  Average bps transferred       51,282        -   34,264        -
  Net bits transferred         512,816        -  513,952        -
  Test duration (s)                 10        -       15        -
Scenario 3
  Average bps transferred       44,570   43,644   32,443    8,594
  Net bits transferred         802,256  785,592  811,080  429,712
  Test duration (s)                 19       18       25       50
Scenario 2
  Average bps transferred       39,866        -   32,926        -
  Net bits transferred         398,664        -  691,440        -
  Test duration (s)                 10        -       21        -

Bandwidth (kbps)/Delay (ms)    19.2/10    40/10
Scenario 3
  Average bps transferred       21,904   39,706
  Net bits transferred         635,224  794,128
  Test duration (s)                 29       20

Table 5: Audio Test Cases and Results

 

A summary of the audio test results is provided in Figure 12.  The values in Figure 12 represent the average bits per second (bps) transferred across the WAN during the period of the audio test exercise.

Figure 12: Audio Bits Test Results

 

In summary, for scenario 1, where the local client communicated with the remote client across the WAN, the bandwidth was 18-19 kbps.  This corresponds with the observation that at 9.6 kbps the audio did not function at all.  Increasing the delay to 1000 ms appeared to slow the net transfer by approximately 1 kbps.  For scenario 2, where the local client transmitted voice to two remote clients across the WAN, the approximate bandwidth measured was between 33 and 51 kbps.  This corresponds with the observation at 32 kbps that the audio quality had deteriorated and was choppy and reverberated.  For scenario 3, where two separate audio sessions were established across the WAN, the approximate bandwidth measured was 39-45 kbps.  This corresponds with observations at 40 kbps, where gaps in the speech degraded quality significantly over the course of the audio exercise.  IWS did not appear to have any strategy for delivering audio when it was having trouble keeping up; the transmissions simply became impossible to understand.
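The measurements suggest a simple scaling model, sketched below, in which each one-way audio stream crossing the WAN costs roughly the scenario 1 rate; the 18.5 kbps per-stream figure and the stream counts are taken from the results above, while the linear scaling itself is an assumption.

    # Simple scaling model suggested by the audio measurements: WAN audio load
    # grows roughly in proportion to the number of WAN-crossing streams.

    PER_STREAM_KBPS = 18.5  # approximate scenario 1 (one client to one client) rate

    def estimated_wan_load_kbps(wan_crossing_streams: int) -> float:
        return wan_crossing_streams * PER_STREAM_KBPS

    print(estimated_wan_load_kbps(1))  # ~18.5 kbps, scenario 1
    print(estimated_wan_load_kbps(2))  # ~37 kbps, within the 33-51 kbps range
                                       # observed for scenarios 2 and 3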

 

 

9.                  Remote Room Login

Remote-room login tests were conducted to determine how much time it would take for a client to join a meeting in a sidebar room on a server that was not its home server.  Upon joining the workspace, all data was pushed to the new member of the workspace; the time varied between 2 and 7 minutes depending on bandwidth.  If more clients joined at the same time, the process of downloading the workspace took much longer.  IWS has a Quality of Service (QoS) indicator, which displayed a QoS between 1 and 14 when room login was attempted by the remote client.
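A rough model of the join time, sketched below, treats it as the accumulated workspace data divided by the available bandwidth; the 1.5 MB workspace size is a hypothetical figure chosen for illustration only.

    # Rough estimate of workspace-download time at remote room login: all room
    # data is pushed to the joining client, so join time scales with workspace
    # size over available bandwidth.  The workspace size is an assumption.

    WORKSPACE_BYTES = 1_500_000  # assumed accumulated chat, files, and whiteboard state

    def join_time_minutes(workspace_bytes: int, bandwidth_kbps: float) -> float:
        return workspace_bytes * 8 / (bandwidth_kbps * 1000) / 60

    for kbps in (128, 64, 32):
        print(kbps, round(join_time_minutes(WORKSPACE_BYTES, kbps), 1), "min")
    # ~1.6, 3.1, and 6.2 minutes, roughly spanning the 2-7 minute range reported above.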

The IWS client software operated on the client computer as a local application.  In the scenario where a remote client logged into a room located on the local server, data was transferred between the local server and the remote client.  When the delay was increased, the remote client could not join the local server sidebar room 1 for five minutes at 256 kbps bandwidth and 2000 ms delay (4000 ms round trip).  This may be because previous chat conversations were being uploaded to sidebar room 1.  There is no feature for clearing chat from a room; the Placeware server must be stopped and restarted.

Other operator-observed delays when logging into a remote sidebar room at low bandwidth were as follows.  At 9.6 kbps/10 ms, login to the remote server sidebar room took 1 minute 45 seconds.  At 4.8 kbps/10 ms, login took 4 minutes.  Login and navigation to the remote server sidebar room 2 took 4.5 minutes at 9.6 kbps/1000 ms, compared with 1 minute 45 seconds when the delay was 10 ms.

 

10.             Combined Applications

For the combined-application test, which involved exercising multiple features simultaneously, scenario 1 was executed for a duration of one minute.  The features combined were chat and file upload for one pair of clients across the WAN, and whiteboard and audio for the other pair of clients across the WAN.  Chat input consisted of one operator transmitting a string of chat text every 10 seconds, while another provided voice input, and another added an image in JPEG format to the whiteboard.  While the remote client was uploading a 100 KB file, all other applications virtually stopped; after the upload completed, chat, whiteboard, and audio continued.  Thus, it would seem that the file-upload feature has priority over the others.  A summary of the combined-application test results is provided in Figure 13.

Figure 13: Combined Applications Test Results

 

The results for the 9.6 kbps/10 ms case show that when bandwidth is limited, some applications have priority over others, because the other applications appeared to stop during the file upload.
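A simple aggregation of the per-feature rates measured in the preceding sections, sketched below, shows why contention is unavoidable at 9.6 kbps; the individual rates are approximate and the additive model is an assumption.

    # Illustration of why the 9.6 kbps case forces prioritization: the sum of
    # the feature rates measured earlier far exceeds the link capacity.

    FEATURE_KBPS = {
        "chat (scenario 1)": 1,
        "audio (one stream)": 18.5,
        "whiteboard (large objects)": 11,
        "file upload (9.6 kbps link)": 8,
    }

    offered = sum(FEATURE_KBPS.values())
    print(f"offered load ~{offered:.1f} kbps vs a 9.6 kbps link")
    # ~38.5 kbps of offered load on a 9.6 kbps link, so some traffic must wait;
    # the observations suggest the file upload is served first.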

 

11.             Shared View

The Shared View test scenario consisted of one client presenting a PowerPoint presentation with voice audio from a room on one server, while one to three clients logged in both at another server and at the presenter's server to view and listen to the presentation.  The shared-view exercise began with a 1 MB PowerPoint file, a bandwidth setting of 128 kbps, and a delay of 500 ms.  The duration of the presentation was three minutes.  Slides were advanced about every 20 seconds, and narration occurred most of the time, with interruptions of about 3 seconds to change slides.  The lower end of the bandwidth/delay testing was conducted at 64 kbps bandwidth and 800 ms delay.  At this setting it took about 9-10 seconds for a slide to advance at the clients viewing the presentation.  With as few as two clients viewing the presentation, the audio was unsatisfactory; typically, it would cut out while the slide was updating.  There was virtually no delay with one client at a bandwidth of 128 kbps, but a 4-second delay when two clients were listening.  If the viewing client logged in remotely to the presenter's server, this appeared somewhat less efficient, and audio deterioration was observed with as few as one viewing client.  At the point where deterioration was observed, the communications path was essentially at its limit.  At the 128/500 bandwidth/delay setting, three clients could view a presentation and voice quality was acceptable with one or two breaks; this appeared to be the limit for audience members at this bandwidth/delay setting before significant deterioration would occur.  A summary of the shared-view test results is provided in Figure 14.
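A rough model of the slide-advance delay, sketched below, combines per-slide serialization time with the one-way delay; the ~75 KB per-slide payload is an assumption chosen for illustration, not a measured IWS value.

    # Rough model relating slide-advance delay to per-slide payload size at the
    # tested bandwidth/delay settings.  The payload size is assumed.

    def slide_advance_delay_s(slide_bytes: int, bandwidth_kbps: float, one_way_delay_ms: float) -> float:
        return slide_bytes * 8 / (bandwidth_kbps * 1000) + one_way_delay_ms / 1000.0

    print(round(slide_advance_delay_s(75_000, 64, 800), 1))   # ~10.2 s at 64 kbps/800 ms
    print(round(slide_advance_delay_s(75_000, 128, 500), 1))  # ~5.2 s at 128 kbps/500 ms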

Figure 14: Shared Viewing Test Results

 

H.                Assessment of IWS Performance

Table 6 provides an evaluation of how well each IWS feature performed under the technological constraints of limited bandwidth and network congestion or latency experienced under real-world conditions.  Results of tests run to discover performance thresholds, including robustness (e.g., the ability to view an image with a certain amount of discrimination, and the acceptability of audio or video), are also summarized in Table 6.  A rating of Pass indicates that the IWS feature supported the performance constraints.

 

 

Test Name                          Test Observations                                                       Rating

Communications Resiliency          4 seconds to re-federate servers; remote client connection rejected     Fail

Chat Resiliency                    75 seconds to re-federate servers; remote client connection rejected    Fail

Remote File Transfer Resiliency    4.5 seconds to re-federate servers; remote client locked up             Fail

Audio Resiliency                   3 minutes 40 seconds to re-federate servers; remote client unable to    Fail
                                   join sidebar rooms

Whiteboard Resiliency              5 minutes 20 seconds to re-federate servers; remote client unable to    Fail
                                   join sidebar rooms; objects restored after the remote client logged
                                   out and back in

Chat                               No problems until the bandwidth dropped to 1.2 kbps for scenario 1      Pass
                                   and 9.6 kbps for scenario 4, where chat began to see significant
                                   delays

File Transfer                      At a low delay of 10 ms, the data transfer rate approaches the limit    Pass
(100 KB, 500 KB, and 1 MB)         of the communications path

Whiteboard                         At 128 kbps/10 ms, drawing from the remote client showed a 3-second     Pass
                                   delay before rendering was complete; drawing initiated by the local
                                   client rendered instantaneously

Video                              The picture appeared to update only every couple of seconds, even at    Fail
                                   the higher bandwidth of 2 Mbps; the application had to be stopped
                                   and restarted

Audio                              Choppy and slightly delayed voice; quality seemed to degrade over       Fail
                                   time; at 32 kbps the audio was choppy and reverberated, and at
                                   9.6 kbps the audio was not functioning at all

Remote Room Login                  At 256 kbps and 2000 ms delay (4000 ms round trip), the remote          Fail
                                   client could not join the local server sidebar room 1 for five
                                   minutes; login and navigation to the remote server sidebar room 2
                                   took 4.5 minutes at 9.6 kbps/1000 ms, compared with 1 minute 45
                                   seconds at a 10 ms delay

Table 6: Summary of IWS Performance Test Results


 

 

 

 

 

 

 

 

 

 


 

VI.             Conclusion

In conclusion, collaborative tools can facilitate communication among warfighters distributed throughout the world.  We have a responsibility to ensure that the tools are reliable and not a hindrance to those who use them; that means they should not be intrusive in daily operations and should support mission requirements.  The current desire to use software systems developed for business purposes in warfare settings may encounter obstacles.  It is imperative that we understand the promises and the limitations of collaborative tools and their impact on warfighter effectiveness in a heterogeneous command-center environment.  This thesis has presented a methodology for assessment of the performance reliability of collaborative tools.

It is the author's position that it is critical that a standard test methodology be used when evaluating the performance of a collaborative tool.  This provides a consistent assessment and useful insights for comparison studies, where it is critical that the correct tool be selected to meet stated mission requirements.  It may also have implications for the evaluation of collaborative tools according to other aspects of reliability, such as usability.  This research shows the need for more evaluation of the collaborative tools used in military operations and shows how a test methodology can be systematically applied.

There is clearly a need for more research on the performance of collaborative software in military operations.  There are also many other performance-related criteria that could be used in the selection of a collaborative tool.  This thesis has explored collaborative-tool performance in terms of bandwidth and delay.  Future work may include extending the test methodology to include an evaluation based on usability, ease of setup, and the number of software inconsistencies encountered.  Each tool would receive a score in each of these areas, and a recommendation could then be made.


 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 


 


 

 

 

 

 

 

 

 

 

 

 

 

 

 

INITIAL DISTRIBUTION LIST

1.                  Defense Technical Information Center

Ft. Belvoir, Virginia

 

2.                  Dudley Knox Library

Naval Postgraduate School

Monterey, California

 

3.                  Dr. Man-Tak Shing

Computer Science Department

Naval Postgraduate School

Monterey, California

 

4.                  Dr. Neil Rowe

Computer Science Department

Naval Postgraduate School

Monterey, California

 

5.                  Dr. Peter Denning

Computer Science Department

Naval Postgraduate School

Monterey, California

 

6.                  Brenda Joy Powers

USEUCOM

Stuttgart, Germany

 

7.                  Dr. LorRaine Duffy

SPAWARSYSCEN

San Diego, California

 

8.                  LtCol George Whitbeck

USEUCOM

Stuttgart, Germany

 

 

 

 

 

 

 

 

 



[1] Bandwidth in kilobits per second (kbps) and delay in milliseconds (ms).

[2] These values represent the average bits per second (bps) transferred across the WAN over the 90-second chat exercise, during which operators sent a text string every 10 seconds for a total of 10 transmissions.

[3] These values represent the average bits per second (bps) transferred across the WAN during the period of the file transfer exercise.

[4] These values show the net bits transferred and provide an indication of the accuracy and consistency of the measurements, since they should all be about the same across a row.

[5] These values show the transfer time in seconds for the given file size, bandwidth, and delay condition.