
ABSTRACT
The third phase of the Readiness Based Sparing (RBS) process is called life cycle maintenance. The key to this phase is answering the questions, "How are we doing?" and "How can we do better?" Since RBS sparing loads have been in the fleet for some time, it must now be determined whether the goals and predictions made at the outset are being met. Assessments are often out-of-date, or are not available for the specific equipment under study. They may not use the same criteria for failures or replacements that are needed for sparing determinations, and they do not include a key measurement for RBS use, gross (supply) effectiveness. Also, from a program management point of view, these studies are time-consuming and expensive. This paper deals with the Naval Sea Logistics Center's (NavSeaLogCen) efforts to create an assessment tool that is automated, timely, and user-friendly for the purposes of computing logistics indicators such as mean time between failures (MTBF), mean time to repair (MTTR), and mean requisition response time (MRRT), as well as achieved gross effectiveness. In turn, these indicators will be used by the NavSea TIGER simulation model to compute operational availability (Ao).

An Approach to Assessing Readiness Based Logistic Support Policies

Introduction

The first stated purpose of OpNavInst 3000.12 [1] is to establish the quantity called operational availability, or Ao, as the primary measure for material readiness. This instruction also states "readiness thresholds and subsequent measures of readiness of systems introduced to the fleet shall be based solely on material readiness." Clearly, Ao is the parameter to be used as an assessment index, or measure, for determining if weapon systems are able to accomplish their intended mission from a material point of view. Therefore, when we are required to optimize our resources when supporting a weapon system, it is natural to use this defined quantity, Ao, as the objective. And if we could express the readiness objective, Ao, as an explicit function of support costs, we would accomplish one of the goals of acquisition reform. It is necessary to structure an analysis technique which will treat cost as an independent variable, as mentioned in the latest acquisition doctrine [2]. This is precisely the technique which NavSup Instruction 4442.14A defines: a technique which relates readiness explicitly to retail supply support costs of corrective maintenance actions of weapon systems [3]. In this instruction there is a feedback loop defined as part of the readiness based sparing process. It is this feedback loop, capturing empirical data from the fleet, that will be explored later in this paper.

The readiness based sparing (RBS) process has been acquired by the Navy just as a weapon system would be acquired. From the outset there have been milestones and phases similar to a typical hardware acquisition [4]. Today, we are well into the production/deployment phase of the RBS process [5]. As required, it is time to measure not only the results of RBS implementation on weapon systems, but to assess the RBS process itself as well.

ROLE IN ASSESSMENT
There has been some concern within the Navy regarding various activities being involved with assessing readiness. We would like to shed some light on the role that NavSeaLogCen has in this area. Early in the acquisition of the RBS process it was recognized that NavSeaLogCen was to fill a crucial role. In effect, NavSeaLogCen would act as the "in-service engineering activity" (ISEA) providing life cycle support for the process to be used by customers, namely, other Navy logisticians. As part of life cycle support, it is important to have the capability of measuring (or assessing) that process and its results in order for improvements to be made. A paper was written discussing this point [6]. Following the paper, a notice came from the Chief of Naval Operations (CNO) directing the Naval Sea Systems Command (NavSea) to expand the mission of NavSeaLogCen "to serve as the NavSea technical agent for developing, maintaining, and assessing life cycle logistics support policies, procedures and data systems and perform other technical support functions" [7]. Following this direction, NavSea issued an instruction stating, as part of its mission, that NavSeaLogCen shall "assess the retail and wholesale material support provided for equipment to assure readiness goals will be achieved at minimum cost" [8]. It is imperative that we then be able to measure readiness, since the support policy of RBS is with respect to readiness, i.e., operational availability Ao.

READINESS ASSESSMENT WORKING GROUP
In May 1994, the Readiness Assessment Working Group (RAWG) was formed by direction from the Readiness Based Sparing (RBS) Flag Steering Board [9]. Its tasking was to develop a timely, consistent methodology to evaluate achieved operational availability of those systems supported by RBS. Command representatives with working knowledge of past and current assessment tools and systems were sought from various activities and assembled to compose the working group. The group was co-chaired by NavSea 91 and NavSup 041.

Meetings were initially held on a monthly basis to define readiness assessment criteria, data sources, capabilities, metrics, etc. The RAWG membership received briefings on current and past readiness assessment methodologies such as the Materiel Readiness Data Base (MRDB), Ships' Machinery Analysis and Review Technique (SMART), and System Performance and Readiness Improvement through Technical Evaluation (SPRITE). After many discussions, it was agreed that the process must be available to a wide range of Navy personnel, with varying backgrounds and capabilities, with little or no expense to the user. A two-tier process was conceptualized.

The first tier would measure readiness, maintenance, and supply logistics indicators decided upon by the RAWG. The primary function of this product would be to provide a method to quickly and roughly indicate whether a system was having a readiness problem. It would be available to anyone with access to the ships' 3-M database. It would be applicable to any Navy system and would provide the capability to assess the logistics indicators at various levels, e.g., all Navy, fleet, ship class, ship, system, or equipment. It would also provide easy-to-read data tables and graphics for each indicator.

ADP = Automated data processing
APL = Allowance Parts List: a supply and maintenance document that lists pertinent data for parts of individual equipments.
EIC = Equipment Identification Code: a seven-position alphanumeric code used to identify systems and equipments in a quasi-top-down breakdown format.
ODBC = Open Database Connectivity: a standard application programming interface (API) developed by Microsoft. It allows a single application to access a variety of data sources for which ODBC-compliant drivers exist. The application uses Structured Query Language (SQL) as the standard data access language.
SNAP = Shipboard Nontactical ADP Program
SQL = Structured Query Language: an English-like language used for querying, updating, and managing data in a relational database. SQL is the ANSI industry standard and is the language used and included in Oracle RDBMS.

The second tier would be a more robust tool capable of determining an estimate of a system's Ao. This tool would provide a consistent method of evaluating RBS determined spares loads and the RBS process itself; that is, it would be a quality assurance tool for RBS. The intent was to make this tool readily accessible to in-service engineering agents (ISEAs), authorized RBS analysts, and headquarters program managers.

The RAWG adjourned in February 1995, tasking NavSeaLogCen to design and implement these tools with the stated goals in mind. The remainder of this paper is devoted to NavSeaLogCen's efforts to achieve the RAWG's goals and the two products that have emerged from these efforts.

Assessment Approach for RBS

CLOSED LOOP FEEDBACK SYSTEM
Feedback is a basic premise of any system where we are trying to control an outcome by physically changing a parameter that affects that outcome. Figure 1 shows a picture of this concept.

Steering a car is a simple example of a closed loop feedback system. The desired output of this closed loop system is to keep the car centered in the proper lane. The car's position is the "plant output" variable in control theory language. The control input is the rotational position of the steering wheel. Under normal conditions the driver has direct control of the steering wheel and alters the steering wheel's position to achieve the desired output within a certain tolerance limit. The feedback mechanism is the driver's sight. From direct comparison of where the car is with respect to the double yellow line and the outside edge of the road, the driver makes small adjustments to the control input to achieve the desired plant output. There is always some tiny bit of difference and the driver is continually monitoring the output and adjusting the input. This system is conceptually very simple. What goes on inside the brain and how we learn and adapt to control the car from the time we start driving is another matter.

Since we are continually monitoring in the example above, it is known as a continuous feedback loop. There are systems where continuous feedback is not used. If the driver of a car were to close their eyes and only open them briefly every five seconds, we would have an example of a discrete feedback system. We sample the plant output at discrete points in time. With that one sample we make adjustments to the inputs to guide the car to the right course with the eyes closed until we take the next brief peek.


FIGURE 1. Closed Loop Feedback System

Although the discrete feedback system is a practice not condoned by the highway department, in the logistics support community it is this type of feedback that tends to be used. Data is collected over a period of time. At a discrete point in time we try to evaluate what has happened over the preceding interval. This interval may be months or years, depending on the volume of data available and the quality of the basic feedback sensing devices. In the Navy there is a reporting (or feedback) mechanism known as the Ships' 3-M Maintenance Data System (MDS). Other sources of information are used; however, this is the primary feedback mechanism that is used by the assessment process described in this paper.

REFERENCE MODEL CONCEPTS FROM RIP
The readiness based sparing process is actually an outgrowth of an earlier Navy development called the Readiness Improvement Program (RIP). Again using the basic concept of making process improvements based on feedback from the fleet, the schematic for a closed loop feedback control system from basic control theory applies. However, it was recognized that the output variable, system Ao, was not something that could be directly measured on a consistent basis. In order to conduct a valid readiness assessment, it was necessary to construct a reference model which would provide a consistent experimental environment. The CNO initiated this program and stated the "objective is to maximize total ship readiness by relating resources to wartime material readiness" [10]. Models of systems were to be used and simulated in wartime conditions.

The use of reliability block diagrams (RBDs) as a reference model of the weapon system is a primary tenet of the RIP. By capturing the mission success criteria of the weapon system and incorporating information with a design reference mission, a simulation approach is the primary method for readiness assessment. "The design reference mission (DRM) provides a consistent basis for designing for readiness, readiness measurement, and the allocation of spares" [11].

A significant portion of work in process control methods using a model reference approach involves identifying the values for the parameters used in the model. In Landau [12], several methods of identifying model parameters for discrete-time processes are discussed. In the assessment process described in this paper, Bayesian methods are preferred where they can provide an unbiased and consistent estimator, given certain conditions apply. NavSeaLogCen is conducting ongoing research in this area as automated methods are adapted. Anecdotal evidence suggests there may be situational bias involved at times. This occurs when the organization performing an assessment is influenced by the fleet's requirements at the time a failure may occur. In short, if a failure occurs while a system is in port or otherwise not required, no downing event observation is logged and this data is not recorded at the equipment level in which the failure actually occurred. Marcus and Tang [13] state that inconsistent results of estimating Ao are attributed to not reporting equipment failure events consistently. Use of automation and a reference model approach, if cautiously applied, help minimize most forms of subjective bias.

Development of Automated Software

SHIPS' LOGISTICS INDICATOR COMPUTERIZED REPORT (SLICR)
The SLICR can be considered a "first step" in system assessment. It provides a rough, completely automated look at various logistics indicators through tabular and graphical displays (Table 1). The ability to "bore down" through the many levels of system hierarchy is a key feature of this product. For instance, the user may start with a ship class, identify the ship in the class with the most failures, then find the equipment on that ship with the most failures. Also, by charting the latest five years of ship maintenance data by quarter, system trends can be easily observed (Figure 2). The SLICR is available to all users of the Ships' 3-M system Open Architecture Retrieval System (OARS) version 2.1.

READINESS ENGINEERING COMPUTERIZED ASSESSMENT PROGRAM (RECAP)
RECAP is a prototype assessment tool which evaluates each of the reliability, maintainability, and availability components of Ao for systems which have been modeled using standard readiness based sparing (RBS) methodology. These components are then used to calculate the system's estimated wartime Ao using the NavSea TIGER simulation program. RECAP also provides a comparison report between past system parameters and the empirical parameters. This comparison report can be used as a quality assurance tool for the RBS process (Table 2).


FIGURE 2. Sample SLICR output graph (data points with regression line)

TABLE 1. SLICR Logistics Indicators

Failures: Number of maintenance actions reported as a complete or partial failure.
Number of Maintenance Actions: All maintenance actions reported. Primarily used as a data quality check.
Number of Parts Requisitions
Total Ship Force Man-hours
Number of Ship Visits
Logistics Delay Time
Parts Cost
Net Effectiveness: The percentage of times an allowed item is onboard when needed.
COSAL Effectiveness: The percentage of times a needed item is on allowance.
Gross Effectiveness: The percentage of times a needed item is onboard whether or not it is allowed.
Maintenance Effectiveness: The percentage of times a maintenance action is completed with onboard spares.
Ship Steaming Hours: Used primarily to indicate system operating tempo and population.

Data Gathering
Data gathering is the key to the entire RECAP assessment process. Obviously, the quality of the data going into the computer program directly affects the accuracy of the results. Therefore, a good working knowledge of the equipment and its applications is recommended before gathering data. At this time, data is gathered from five sources. Future plans are to integrate databases to create a seamless process.

RBS Files
Three files developed for the RBS process are required to perform system assessment. Even a system that is not spared using RBS can have these files developed in a relatively short time. The RBS files must be complete and accurate prior to the assessment phase, although results of the assessment may indicate a need to change values in one or more of the files.

The primary RBS file is the Part File. This file contains important data such as item stock number, military essentiality code (MEC), equipment type (block) number, and quantity installed. The Part File is used to identify failures and apportion them to the correct reliability block or blocks.

The Equipment Type File lists all blocks in a system and their quantities. It also stores past estimates of MTBF, MTTR, and MRRT.


TABLE 2. Sample RECAP Report

Eq Type  Nomenclature        Original MTBF  Assessed MTBF  Bayes MTBF  Original MTTR  Assessed MTTR  Assessed GE  Assessed MRRT  # Failures  Hours
0001     Antenna                     30097           4348        4884            .12          18.36          .70            380     47.0830  204751
0002     Transmitter                  2244           1366        1372           1.21          11.30          .66            404    145.6940  199053
0003     Freq Converter              31630          19407       20426            .46           7.76          .6J            428     11.0000  213486
0004     Receiver                     3546           4928        4896            .66           8.14          .36            404     41.9200  206608
0005     Sig Data Conv                1858           3592        3562            .55          10.52          .53            404     57.4450  206368
0006     Radar Subassembly           89427          47507       55128            .55           2.30          .33            404      4.5000  213782
0007     Test Set                     1487           2808        2789            .19           6.63          .44            404     72.1350  202557
0009     Limiter                    393809        No Hits      608477           1.50            .00         1.00            524       .0000  214668
0010     Controller                 198413          35638       58891            .73           4.00          .50            380      6.0000  213828
0011     Regulator                   89686         107012      101236            .79          66.00          .00            380      2.0000  214024
0012     Monitor                     36377          71242       62525            .77          31.11          .00            404      3.0000  213726
0014     Dehydrator                  75188        No Hits      289856            .57            .00         1.00            404       .0000  213726
0015     Switch                     200000         214667      207333           1.99           1.33         1.00            332      1.0000  214667

The TIGER Input File contains the design reference mission time line as well as MTBF, MTTR, MRRT, and block duty factors for a specific system and ship class. This file is used by the TIGER program to simulate missions and estimate Ao.
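The actual record layouts of these three files are not reproduced in this paper, so the sketch below only illustrates, in Python, the kind of per-part, per-block, and mission information the text describes (stock number, military essentiality code, block number, quantity installed; past MTBF, MTTR, and MRRT estimates; mission time line and duty factors). All field names here are illustrative assumptions, not the real RBS file formats.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class PartRecord:
    """One entry of a hypothetical RBS Part File, as described in the text."""
    stock_number: str     # national stock number
    mec: int              # military essentiality code (critical vs. non-critical)
    block_number: int     # equipment type (reliability block) the part belongs to
    qty_installed: int    # quantity installed in that block


@dataclass
class EquipmentTypeRecord:
    """One entry of a hypothetical Equipment Type File: a block and its past estimates."""
    block_number: int
    block_population: int   # number of identical blocks in the system
    original_mtbf: float    # hours, from the original RBS model
    original_mttr: float    # hours
    original_mrrt: float    # hours


@dataclass
class TigerInput:
    """Minimal stand-in for the TIGER input file contents named in the text."""
    mission_phases: List[str]                       # design reference mission time line
    phase_hours: List[float]                        # duration of each mission phase
    duty_factors: Dict[Tuple[int, str], float]      # (block, phase) -> fraction of phase operating
```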

Ships' 3-M Maintenance Data System (MDS)
The Maintenance Data System is the Navy's repository for ships' maintenance and parts usage data and is the main usage data source for this assessment process. Data is acquired directly from ships' onboard data processing systems (SNAP) for all systems except for fleet ballistic missile systems and nuclear power plants. Both organizational and intermediate level maintenance data are collected.

The first step is to identify the ship(s) to be included in the data and the time frame desired. Only ships with the specific equipment configuration under analysis should be used. This is necessary because steaming hours for the ships requested (if they reported data) will be used as a basis for calculating the equipment’s operating time.

NavSeaLogCen uses Powersoft InfoMaker pipelines to import data from the Ships' 3-M Oracle database into personal computer (PC) or LAN database files for use in RECAP, although any ODBC and SQL compliant software may be used. The data is extracted by equipment identification code (EIC), allowance parts list (APL), or both, as well as by ship(s) and time frame. Specifying both EIC and APL will ensure that records with incorrect information in either of these fields will be extracted. The four PC files necessary for assessment correspond to the maintenance, issue, demand, and narrative tables of the MDS database. See Reference 14 for detailed descriptions of the Ships' 3-M database tables and data elements.
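As a rough illustration of the kind of extraction described above, the sketch below pulls maintenance records by EIC or APL, ship, and date range over ODBC. The table and column names (MAINTENANCE, EIC, APL, SHIP_UIC, JCN_DATE) are assumptions made for illustration only; the real Ships' 3-M schema is documented in Reference 14, and NavSeaLogCen's actual extraction uses InfoMaker pipelines rather than this code.

```python
import pyodbc  # any ODBC- and SQL-compliant tool could be used instead


def extract_maintenance(dsn, eic, apl, ships, start, end):
    """Pull 3-M maintenance records for the given EIC or APL, ships, and time frame.

    Table and column names are hypothetical; substitute the real Ships' 3-M
    schema from the database reference manual.
    """
    conn = pyodbc.connect(f"DSN={dsn}")
    placeholders = ",".join("?" * len(ships))
    sql = (
        "SELECT * FROM MAINTENANCE "           # hypothetical maintenance table
        f"WHERE SHIP_UIC IN ({placeholders}) "
        "AND (EIC = ? OR APL = ?) "            # either field may be miscoded, so match on both
        "AND JCN_DATE BETWEEN ? AND ?"
    )
    rows = conn.cursor().execute(sql, (*ships, eic, apl, start, end)).fetchall()
    conn.close()
    return rows
```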

Steaming Hours
Data on steaming hours are also resident in the Ships' 3-M database in a table called FUELDATA. Since this table is only updated quarterly, the entire table need only be downloaded quarterly and can be used for all projects. The RECAP program uses ship steaming hours to estimate equipment operating hours as detailed later in this paper.

If system operating hours are known from some other source, this data can be directly input into RECAP. In this case, use of steaming hours would be unnecessary.

Requisition Response Times
Requisition times are a combination of supply delay and transportation time. Two data sources are used to obtain these times:

Military Standard Supply and Transportation Evaluation Procedure (MILSTEP). This system collects requisition data from Navy retail level ships. At this time, programs to query this database and compute mean and median requisition response times for a group of items are run by NavICP-M. Values can be obtained at the reliability block level or for the entire system. These values are then fed into the RECAP program and eventually into the TIGER simulation program.

Requisition Response Time Management Information System (RRTMIS). This system collects requisition data from wholesale load-carrying ships (CV, CVN, AFS, etc.). For assessment purposes, only the shipping time is used since this data is sketchy in the MILSTEP database.

Alternate Stock Numbers
Many times, a part's stock number will change between the time of the maintenance action and the time the RBS part file is developed. To account for this situation, a computer program was written to obtain alternate or superseded stock numbers from Navy files. RECAP then cross references the old to new stock numbers when performing failure assignment. Use of this data is not required but is highly recommended.
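A minimal sketch of the cross-referencing idea (not the actual NavSeaLogCen program): given a supersession map built from the Navy alternate/superseded stock number files, an old stock number reported in a maintenance action is walked forward to its current number before failure assignment. The dictionary structure is a hypothetical stand-in.

```python
def current_stock_number(nsn, supersessions):
    """Follow old-to-new stock number supersessions until the latest number is found.

    `supersessions` is a dict {old_nsn: new_nsn} built from Navy alternate/
    superseded stock number data (an assumed structure for illustration).
    """
    seen = set()
    while nsn in supersessions and nsn not in seen:
        seen.add(nsn)            # guard against circular references in the data
        nsn = supersessions[nsn]
    return nsn
```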

Working with Ships' 3-M Data
Many formal and informal studies have concluded that "raw" data in the Ships' 3-M database must be edited before it can be properly used for equipment assessment. This "scrubbing" has taken the form of simple, automated editing as well as complex, expensive, manual editing. Now, a unique opportunity for automated assessment exists because we have available the validated part files and mission simulation software already being used in the RBS process. Proper use of these assets allows us to identify failures, apportion them to the proper reliability blocks, and use this information to predict availability automatically. The benefit of this process over past automated assessment is decreased reliance on ship-reported data elements and broad estimation, which can often be misleading.

Identifying Failures
When a ship reports a maintenance action, it cannot be assumed that a system failure has occurred. There are planned maintenance actions, elapsed time meter reading reports, and minor part failures that do not cause a system downing event. The problem, then, is "How do we find the true failures within the universe of maintenance actions?" Attempts were made to use a data element called Status Code that is reported by the ship and is designed to identify "down" systems. In actuality, this field is subjective in nature and cannot be relied on to identify failures.

Another option would be to review the narrative and part replacements from each action, and decide which actions were failures and which were not. Of course, this is time consuming and did not fit the guidelines for developing a fully automated system. Therefore, the use of RBS part and equipment files (equipment models) was picked as the best way to efficiently identify failures. It then becomes a four-step process to extract actual failures from the "raw" 3-M data.

1. Delete maintenance action records that are canceled or are only reporting time meter readings. These are not used in the failure identification process but will be used to identify beginning steaming hour dates.

2. Delete maintenance actions with no parts identifiable to the system under study. While some of these actions may contribute to downtime, they should not influence sparing decisions.

3. Use the RBS part data file to identify failures. This file contains data which specifies whether the failure of a part is critical or non-critical to the operation of the block. When a critical part is used in a non-deleted maintenance action, a failure will be counted. While only critical parts will count as failures, non-critical parts will be used in the part allotment described below.

4. Use the RBS part data file to distribute failures to the reliability blocks. If a maintenance action is uniquely identifiable to one block (because one or more parts associated with the action is unique to the block), this is easy. However, when parts are common to more than one block, an "allotting" of the failure among the blocks is done. This allotment is based on individual block duty cycles and block populations of the parts, essentially the "probability" that the failure occurred in a particular block. (A sketch of steps 3 and 4 follows this list.)
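The following sketch illustrates steps 3 and 4 under stated assumptions: a maintenance action counts as a failure if any replaced part is coded critical in the RBS Part File, and when the action's parts are common to several blocks the failure is allotted in proportion to each block's duty cycle times the part population installed in that block. The data structures and weighting are illustrative; the production RECAP logic may differ in detail.

```python
from collections import defaultdict


def assign_failures(actions, part_file, duty_cycle):
    """Distribute failures among reliability blocks (a sketch of steps 3 and 4).

    actions:    list of maintenance actions; each is a list of stock numbers replaced
    part_file:  {stock_number: [(block, qty_installed, is_critical), ...]} from the RBS Part File
    duty_cycle: {block: duty factor} taken from the TIGER input file
    """
    failures = defaultdict(float)
    for parts_used in actions:
        # Step 3: the action counts as a failure only if some replaced part is critical.
        if not any(crit for p in parts_used
                   for _, _, crit in part_file.get(p, [])):
            continue
        # Step 4: weight candidate blocks by duty cycle x installed population,
        # using every part in the action (critical or not) that maps to a block.
        weights = defaultdict(float)
        for p in parts_used:
            for block, qty, _ in part_file.get(p, []):
                weights[block] += duty_cycle.get(block, 1.0) * qty
        total = sum(weights.values())
        for block, w in weights.items():
            failures[block] += w / total   # one failure apportioned across blocks
    return dict(failures)
```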

MTTR
The time to repair is obtained from the Ships' 3-M maintenance man-hours field, which is a validated data element. Since maintenance man-hours are not synonymous with repair hours, man-hours must be divided by a factor that equates to the average number of workers needed to correct a failure. Research indicates that a factor of 1.5 is average for most systems and therefore is used as a default. However, this factor can be manually changed for a specific system if better information is available. Repair hours for the block are then factored by the number of failures assigned to the block.
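A small sketch of the computation just described, with made-up numbers; the 1.5 crew-size divisor is the stated default and can be overridden.

```python
def block_mttr(total_manhours, num_failures, crew_factor=1.5):
    """Block MTTR: reported man-hours converted to elapsed repair hours, per failure."""
    if num_failures == 0:
        return 0.0
    return (total_manhours / crew_factor) / num_failures


# e.g., 90 reported man-hours over 12 assigned failures -> 90 / 1.5 / 12 = 5.0 hours
```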

Operating Hours
To properly evaluate system or block mean time between failures (MTBF), the actual operating time of the system or block must be determined. The desired way to do this would be to have time meters on all equipments and receive timely, accurate reporting of the time meter readings. Because meter readings are reported to the Ships' 3-M database in a freeform narrative format, reliable, automated extraction of this data is impractical. Also, not all systems are required to report meter readings at this time. Therefore, to meet our goals, the operating times must be estimated from ship steaming hours and duty factors from the TIGER input file. The following steps are taken to obtain the operating time estimate:

1. Only steaming hours for those ships with 3-M data as defined by the criteria for the project are desired. Also, to account for differences in installation dates, only steaming hour data beginning from the first month of 3-M reporting from a particular ship until the last will be used. For example, ship "A" began reporting 3-M data for an equipment two years ago while ship "B" did not begin reporting until one year ago. The program will count steaming hours for ship "A" for two years and ship "B" for one year, automatically. (It is assumed that 3-M reporting begins soon after installation of a system.) This feature eliminates the need to tailor and extract steaming hour data for each project.


2. Downtime must be subtracted from steaming time because the equipment is not operating during these periods. Downtime can be computed from available data as follows:

Downtime = (No. of Failures)(MTTR) + (No. of Failures)(MRRT)(1 - GE)

In other words, downtime is equal to the number of failures multiplied by the average time to repair each failure, plus the number of failures multiplied by the average delay time for those failures requiring off-ship requisitions. Remember, GE is the percentage of times the part needed is onboard the ship, so (1 - GE) is the percentage of times the part is off-ship.

3. An adjustment must be made for the fact that ships operate differently in wartime than in peacetime (i.e., they spend a much higher percentage of time underway during wartime than peacetime). Because we want to spare for wartime, but only have peacetime steaming data, both underway and not underway times must be factored by a wartime operating ratio or weighting factor. To get these factors, the TIGER input file block duty factors for each mission phase are multiplied by the time line hours for that phase, then divided by the total number of underway or not underway hours from the mission time line. The actual (peacetime) underway and not underway hours are then multiplied by the respective weighting factor to simulate a wartime scenario.

4. Some downtime must be attributed to underway time and some to not underway time. This is calculated simply, using the formulas:

DT_uw = (Hours UW)(DT) / Total Hours
DT_nuw = (Hours NUW)(DT) / Total Hours

where DT is downtime as computed in step 2 above. The operating time is then estimated by the formula:

Operating Time = (Hours UW - DT_uw)(WF_uw) + (Hours NUW - DT_nuw)(WF_nuw)

where WF indicates the weighting factor for underway or not underway hours computed in step 3 above. (A sketch combining these steps appears after this list.)
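The sketch below strings together steps 2 through 4 above. The steaming hours, failure count, and wartime weighting factors shown in the comment are made-up values; in practice the weighting factors would come from the TIGER input file duty factors and mission time line as described in step 3.

```python
def operating_time(hours_uw, hours_nuw, failures, mttr, mrrt, ge, wf_uw, wf_nuw):
    """Estimate wartime equipment operating hours from peacetime steaming data (steps 2-4)."""
    total = hours_uw + hours_nuw
    # Step 2: downtime = repair time plus off-ship requisition delay for unfilled demands.
    dt = failures * mttr + failures * mrrt * (1.0 - ge)
    # Step 4: split downtime between underway and not-underway time, then weight for wartime.
    dt_uw = hours_uw * dt / total
    dt_nuw = hours_nuw * dt / total
    return (hours_uw - dt_uw) * wf_uw + (hours_nuw - dt_nuw) * wf_nuw


# Illustrative call with invented numbers:
# operating_time(3000, 5000, 12, 5.0, 400, 0.7, wf_uw=1.4, wf_nuw=0.6)
```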

MTBF
Once the operating time and number of failures have been computed, the equipment type MTBF can be computed. The population used is the number of blocks with the same equipment type.

MTBF = (Operating Hours)(Population) / No. of Failures

Bayes' MTBF
Using the Bayes' smoothing technique, past estimates of MTBF are factored together with current observed values to create a better estimate. This is especially useful when no usage has been reported during the time frame under study, since no MTBF could be computed using the equation above. However, the fact that there were no failures is significant, and by using the Bayes' method a new estimate is possible.

Bayes' MTBF = [(Operating Hours)(Population) + Original MTBF] / (No. of Failures + 1)
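As a check on these two formulas, the sketch below reproduces the Antenna row of the sample RECAP report (operating hours 204751, 47.083 failures, original MTBF 30097), assuming a block population of one; it returns roughly 4349 and 4884, matching the tabulated 4348 and 4884 to within rounding.

```python
def mtbf(op_hours, population, failures):
    """Observed equipment-type MTBF."""
    return op_hours * population / failures


def bayes_mtbf(op_hours, population, original_mtbf, failures):
    """Bayes-smoothed MTBF: blends the prior estimate with observed operating time."""
    return (op_hours * population + original_mtbf) / (failures + 1)


print(mtbf(204751, 1, 47.083))               # ~4349 (table shows 4348)
print(bayes_mtbf(204751, 1, 30097, 47.083))  # ~4884
```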

Gross Effectiveness
Gross effectiveness (GE) is defined as the probability that a part is available onboard the ship when required for maintenance. GE is determined by the spares load and is, in fact, used to represent the spares load in the TIGER simulation. For RBS purposes, this value is calculated at the reliability block level. Ships' 3-M issue records contain a field called Source Code that identifies whether an item was in stock, not in stock, or not allowed aboard the ship at the time it was requested. Since it is machine assigned, the Source Code is consistent across all ships and is not open to misinterpretation. A block gross effectiveness is computed by simply dividing the number of "in stock" Source Codes by the total number of part issues for each reliability block.
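A minimal sketch of this calculation, assuming the issue records have already been grouped by reliability block and that an "in stock" indicator has been derived from the machine-assigned Source Code (the actual code values are in the Ships' 3-M documentation):

```python
def block_gross_effectiveness(issues):
    """Fraction of part issues satisfied from onboard stock for one reliability block.

    `issues` is a list of booleans: True if the Source Code showed the item in stock.
    """
    if not issues:
        return 1.0   # no demands recorded; treated here as fully effective
    return sum(issues) / len(issues)


# e.g., block_gross_effectiveness([True, True, False, True]) -> 0.75
```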

Manual Editing
While not an original goal of this process, the ability to view and edit Ships' 3-M data to correct errors was added as an enhancement. Using this unique feature, the user can read narratives, parts usage records, and other pertinent data elements. Deleting or undeleting of entire records is possible, as well as addition or deletion of individual parts usage records. It is also possible to reassign failures between reliability blocks if desired. This gives the user further control over the validity of the results.

TIGER Simulation
TIGER is a large and versatile reliability, maintainability, availability (RMA) computer simulation program that was specially created for the study of Navy systems and is capable of representing complex systems under varying operating conditions. For a complete description of the TIGER computer simulation model, see the TIGER Users' Manual [15].

After the parameters MTBF, MTTR, GE, and MRRT are determined, these values can be inserted into the TIGER input file, replacing the values already there. Then, by running the simulation, an estimated value of Ao will be calculated. By specifying which values are changed in the TIGER input file, different estimations of Ao can be accomplished. For example, by using assessed values for all four parameters, the achieved Ao for the data time period is estimated. If the gross effectiveness for a specific allowance model is used, the result is an estimate of wartime Ao for that set of spares.


Assessment has become an integral part of new processes and systems. In fact, it has been mandated for the RBS process. Automation is one way of decreasing the amounts of time and money spent to perform the needed assessment. Through a great deal of trial and error, the SLICR and RECAP programs have been developed and continue to evolve. The programs use very sophisticated algorithms to identify failures and to compute wartime availability. Although not a perfect system, the programs meet the goals of a timely, inexpensive method to assess system performance at both the overall and detailed levels. We believe the SLICR and RECAP programs will be useful tools for in-service engineering agents and program managers to perform system assessment in an easy and efficient manner. As these systems are still under development, NavSeaLogCen is seeking input from organizations that would like to evaluate the tools at this time.

REFERENCES
[1] OpNav Instruction 3000.12, "Operational Availability of Equipments and Weapons Systems," Office of the Chief of Naval Operations, Washington, DC, 29 Dec. 198%.
[2] DoD Directive 5000.1, "Defense Acquisition," U.S. Department of Defense, Washington, DC, 1996.
[3] NavSup Instruction 4442.14A, "Readiness Based Sparing," Naval Supply Systems Command, Washington, DC, 4 Jan. 1989.
[4] "Project Directive for Implementing Readiness Based Sparing on DDG 52," Joint Letter: NavSea 5000, Ser# CEQ/114, NavSup 5000, Ser# 09017/09, dated 14 Sept. 198%.
[5] Burdick, Lenny, "Readiness Based Sparing (RBS): From Concept Exploration to Full Scale Development," Fourth Annual Logistics Symposium, American Society of Naval Engineers, Mechanicsburg, PA, 1990.
[6] Fry, Kevin, "Discussion for Proposed Functions for Readiness Based Sparing Assessment Group at NavSeaLogCen," white paper dated 11 Sept. 1989 (available through author).
[7] OpNav Notice 5450, "Modification of the Mission Area of Naval Sea Logistics Center (NavSeaLogCen), Mechanicsburg, PA," Office of the Chief of Naval Operations, Washington, DC, 20 July 1990.
[8] NavSea Instruction 5450.47A, "Mission and Functions of the Naval Sea Logistics Center, Mechanicsburg, PA," Naval Sea Systems Command, Washington, DC, 4 Dec. 1990.
[9] "Task Assignments from the DDG 51 Class Readiness Based Sparing Board Meeting of 13 Jan. 1994," Joint Letter, NavSup 5000 Ser# Sup41/001, NavSea 5000 Ser# SeaOW 018, Arlington, VA, dated 9 Feb. 1994.
[10] "Ship Readiness Improvement Program (RIP)," Letter Ser# 03/6U3896911 from the Chief of Naval Operations, Washington, DC, 18 Nov. 1986.
[11] "Readiness Improvement Program (RIP) Design Reference Missions (DRM) Report," Confidential Report Ser# 05MR-C029-86A, Naval Sea Systems Command, Washington, DC, Oct. 1989.
[12] Landau, Yoan D., Adaptive Control: The Model Reference Approach, Marcel Dekker Inc., New York, 1979.
[13] Marcus, Alan J., and Victor Tang, "Ship Material Condition Data Base Project: Summary Report," Center for Naval Analyses, Alexandria, VA, Report CRM 91-151, Oct. 1991.
[14] "Ships' 3-M Database Reference Manual," Naval Sea Logistics Center, 16 Feb. 1996.
[15] "TIGER Users' Manual," NavSea Technical Report No. TE660-AA-NMD-010, Naval Sea Systems Command, Washington, DC, Sep. 198%.

Jack Dunst has been employed by the Naval Sea Logistics Center in Mechanicsburg, Pennsylvania, since 1984. He holds a bachelor of science degree in electrical engineering from The Pennsylvania State University. Kevin Fry has been employed by the Naval Sea Logistics Center in Mechanicsburg, Pennsylvania, since 1988. He holds a bachelor of science degree in actuarial science from The Pennsylvania State University.

Jeff Gibbs
HEAD LOGISTICIAN, AUXILIARY AND ENVIRONMENTAL GROUP, NAVAL SEA SYSTEMS COMMAND

The subject paper was well written and technically sufficient. It described the feedback system and modeling for Readiness Based Sparing (RBS) very well. The descriptions of information sources and utilization gave the reader a good understanding of where RBS is at present, and where the authors are taking it using the assessment system. Overall, the information is very helpful.

While reading the paper, I formulated several questions concerning the initial building of an RBS model. As the paper shows, there will be good methods to fine tune RBS models, based on failure data, parts usage, etc., once the

equipment is fielded using RBS as the provisioning model. However, I have some questions in regard to the building of the model prior to any feedback:

1). Acquisition reform is driving the use of performance specifications for most acquisition items. Given this, many items are bought based on performance, including operational availability. As long as the end item meets the performance requirements, the components can vary. Thus, my question is, how will the assignment of MTBFs to Reliability Block Diagrams be accomplished?

2). Similar to the above, my next question deals with assigning failure rates to individual parts of components and assemblies that add up to an end item that meets reliability performance requirements. Is this something that the manufacturer will determine and be provided as part of the contract?

3). If failure rates and MTBFs are provided, how will component changes be tracked, and how will configuration management be accomplished?
