L6S Best Sharing Experience: Attribute Agreement Analysis
Alina Lisineanu, GB L6S certified


DESCRIPTION

This presentation shares my experience using Measurement System Analysis (MSA) on a Lean Six Sigma project and, most importantly, how to interpret the statistical results from Minitab.

TRANSCRIPT

Page 1

L6S Best Sharing Experience
Attribute Agreement Analysis
Alina Lisineanu

GB L6S certified

Page 2

AGENDA

- Objective
- What is MSA
- Continuous vs. attribute MSA
- Project charter
- MSA
- Purpose of Attribute Agreement Analysis
- Prepare the study
- Collect study results
- Prepare Minitab data
- Run Minitab tool
- Attribute Agreement Analysis - graphs
- Fleiss' Kappa statistics
- Minitab results – interpretation of all 4 levels of analysis
- Kendall's correlation coefficient

Page 3

Objective

- Share the experience of using the Attribute Agreement Analysis tool on a L6S project
- Present data and analysis that were not included in the GB report-out
- Learn how to conduct a real Gage R&R study, use Minitab, and interpret statistical results

Page 4

Q: What is MSA?

A: It's a set of techniques that answers the question: if I use this gage to measure, how much can I trust the measurements I get? A measurement system analysis (MSA) evaluates the test method, the measuring instruments, and the entire process of obtaining measurements, to ensure the integrity of the data used for analysis and to understand the implications of measurement error for decisions made about a product or process.

MSA is an important element of Six Sigma methodology and of other quality management systems.

Factors affecting measurement systems:

1. Equipment: measuring instrument, calibration, etc.

2. People: operators, training, education, skill, care

3. Process: test method, specification

4. Samples: materials, items to be tested, sampling plan, etc.

5. Environment: temperature, humidity, conditioning

6. Management: training programs, metrology system, support people, support of quality management systems

Conduct MSA study

Page 5

Continuous vs. attribute MSA

Depending on the type of data, a different analysis is used. The data type can be continuous (time, money, weight, height, length, temperature, etc.) or attribute (count data, Yes or No, Good or Bad, etc.).

When the measure is continuous, a measurement device (gage, gauge) is also involved. In this situation, a Gage R&R study is done, where R&R stands for repeatability and reproducibility.

A Gage R&R is used to estimate the total variation, the part-to-part variation, and the variation due to measurement system.
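As a reminder (a standard formulation, not shown on the original slide), the variance decomposition behind a Gage R&R can be written as:

$$\sigma^2_{\text{total}} = \sigma^2_{\text{part-to-part}} + \sigma^2_{\text{measurement}}, \qquad \sigma^2_{\text{measurement}} = \sigma^2_{\text{repeatability}} + \sigma^2_{\text{reproducibility}}$$

The smaller the measurement-system share of the total variation, the more the gage can be trusted.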

When the measure is attribute, an Attribute Gage R&R (also called an Attribute Agreement Analysis) is used to estimate the total variation.

In the transactional world, most data is attribute and most 'gages' are people. When the gage is a person, the term "appraiser" or "operator" is used.

Page 6

Project Charter – GB project

Project Title:
• Improve EMEA VO invoicing reconciliation processes

Project Definition:
• Reduce by 20% the time spent on manual checks and invoice fallouts by improving the reconciliation processes for each of the service providers, by end of July 2012
• Standardize the reconciliation process across routes to market by end of July 2012
• Routes to market in scope: EMEA Volume Indirect and Volume Direct Operations

Primary Metric / Goal:
• Reduce time spent on reconciliation process activities by 20%, from 95 h/week to 76 h/week (358 h/month to 286 h/month)

Secondary Metric / Goal:
• Reduce the number of invoice fallouts at service providers by 20%, from 693 fallouts/month to 554 fallouts/month

Page 7

Measurement System Analysis – GB project

Data Integrity Review

The time, in hours per week, to complete the invoice reconciliation process was manually measured by the L1 team, for each activity step performed in the process, on a daily basis. A human decision is involved in this measurement step.

Monthly missing invoices at service providers is an accurate measurement, as it is pulled directly from an Access database that compares SAP billing reports with service provider billing reports.

MSA Consideration

No MSA was conducted for the monthly missing invoices at service providers, as the data comes directly from the internal company and service provider systems.

MSA was considered for the time to complete the reconciliation process, to determine whether it is an accurate measurement. An Attribute Agreement Analysis was completed to confirm whether the desired accuracy level is met.

Desired accuracy: 95 percent

Page 8

Purpose of Attribute Agreement Analysis

In the 'Improve EMEA VO invoice reconciliation processes' project, the primary metric was the time spent on overall reconciliation activities; therefore the data type is continuous.

The data was collected from L1 support team feedback, without a measuring system such as a systematic time stamp or a stopwatch. So how do we ensure that we can trust the data, and the decisions based on it, when the data rests on a person's judgment rather than an objective instrument? One method that works very well is Attribute Agreement Analysis.

Attribute agreement is a method of comparing the responses made by appraisers (operators) when judging the characteristics of interest. There are four possible levels of analysis of the responses:

1. Appraiser against themselves - repeatability
2. Appraiser against other appraisers - reproducibility
3. Appraiser against a standard (if one exists)
4. Overall appraiser capability

The present Six Sigma project case study helps explain the tool and how to interpret its results.

Page 9

Prepare the study

To prepare the study you need to establish the number of sample parts, the number of repeated readings, and the number of operators. In our case the following was used:

The standard was set by the expert's measurements, the expert being the person who owns the process (Alina Lisineanu). The 3 operators (appraisers) were:

• Appraiser 1 - the original reconciliation processor => the person performing the process steps on a day-to-day basis (high level of experience)
• Appraiser 2 - the second reconciliation processor => the person occasionally performing the process steps (medium to high level of experience)
• Appraiser 3 - the third reconciliation processor => the person trained on the process steps but actually performing them for the first time (low level of experience)

The samples were established as the main process steps involving manual work. Each sample was measured 3 times on a daily basis. Each appraiser was asked to measure the average time taken to perform each of the 8 process steps and rate it using the following options:

a) less than 15 min
b) between 15 and 30 min
c) between 30 and 45 min
d) between 45 and 60 min
e) between 60 and 90 min
f) more than 90 min
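For illustration only (my own sketch, not part of the study), a small Python helper that maps a measured average time onto this a-f scale; the function name is hypothetical and the cut-points simply follow the list above:

# Hypothetical helper: map an average time in minutes onto the a-f rating
# scale used in the study (cut-points from the list above).
def rate_time(minutes: float) -> str:
    bins = [(15, "a"), (30, "b"), (45, "c"), (60, "d"), (90, "e")]
    for upper, label in bins:
        if minutes < upper:
            return label
    return "f"  # more than 90 min

print(rate_time(22))  # -> 'b' (between 15 and 30 min)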

Page 10

Collect Study Results

You need to keep track of the study results before moving on to Minitab. It's important to know how to arrange the data in Minitab so that the tool's results are correct.

In the Six Sigma project, we collected each appraiser's responses for each trial in an Excel file. You can see below how that was done.

Table 1: Trial No. 1

Sample  Date       Process step                                Appraiser 1  Appraiser 2  Appraiser 3  Standard
1       2-Jan-12   Format SP billing reports                   d            d            d            d
2       3-Jan-12   Format SAP billing reports                  b            b            b            b
3       3-Jan-12   Upload SP and SAP reports in database       b            b            b            b
4       5-Jan-12   Run all queries                             a            a            a            a
5       2-Jan-12   Manually review reconciliation report       e            e            f            e
6       5-Jan-12   Retrigger failed documents                  b            b            c            b
7       6-Jan-12   Identify business and ask for correction    b            c            c            b
8       4-Jan-12   Operations support (per request)            a            a            a            a

Table 2: Trial No. 2

Sample  Date       Process step                                Appraiser 1  Appraiser 2  Appraiser 3  Standard
1       10-Jan-12  Format SP billing reports                   d            d            d            d
2       9-Jan-12   Format SAP billing reports                  b            b            b            b
3       9-Jan-12   Upload SP and SAP reports in database       b            b            b            b
4       12-Jan-12  Run all queries                             a            a            a            a
5       11-Jan-12  Manually review reconciliation report       e            e            f            e
6       11-Jan-12  Retrigger failed documents                  b            b            d            b
7       10-Jan-12  Identify business and ask for correction    b            c            c            b
8       13-Jan-12  Operations support (per request)            a            a            a            a

Table 3: Trial No. 3

Sample  Date       Process step                                Appraiser 1  Appraiser 2  Appraiser 3  Standard
1       19-Jan-12  Format SP billing reports                   d            d            d            d
2       19-Jan-12  Format SAP billing reports                  b            b            b            b
3       15-Jan-12  Upload SP and SAP reports in database       b            b            b            b
4       16-Jan-12  Run all queries                             a            a            a            a
5       16-Jan-12  Manually review reconciliation report       e            e            d            e
6       18-Jan-12  Retrigger failed documents                  b            b            d            b
7       17-Jan-12  Identify business and ask for correction    b            c            c            b
8       17-Jan-12  Operations support (per request)            a            a            a            a

Page 11

Prepare Minitab data

The next step was to arrange the data from the 3 tables so that we can run the Attribute Agreement Analysis tool in Minitab. It's easier to consolidate the data in Excel and then copy-paste it into Minitab.
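As an illustration (my own sketch, not from the slides), here is one way to consolidate the three trial tables into the long format that this kind of analysis expects: one row per sample x appraiser x trial, with columns for the response and the standard. The column names are my own choice.

import pandas as pd

# Responses per (trial, appraiser), samples 1-8 in order, copied from Tables 1-3.
responses = {
    (1, "Appraiser 1"): "dbbaebba", (1, "Appraiser 2"): "dbbaebca", (1, "Appraiser 3"): "dbbafcca",
    (2, "Appraiser 1"): "dbbaebba", (2, "Appraiser 2"): "dbbaebca", (2, "Appraiser 3"): "dbbafdca",
    (3, "Appraiser 1"): "dbbaebba", (3, "Appraiser 2"): "dbbaebca", (3, "Appraiser 3"): "dbbaddca",
}
standard = "dbbaebba"  # the expert's ratings, samples 1-8

rows = [
    {"Sample": s + 1, "Trial": trial, "Appraiser": appraiser,
     "Response": resp[s], "Standard": standard[s]}
    for (trial, appraiser), resp in responses.items()
    for s in range(8)
]
long_df = pd.DataFrame(rows)                 # 72 rows: 8 samples x 3 appraisers x 3 trials
long_df.to_csv("aaa_long.csv", index=False)  # paste/import this layout into Minitab
print(long_df.head())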

Page 12

Run Minitab tool

As mentioned previously, you can copy-paste the data into Minitab or enter it there directly, whichever works best for you. Here are the instructions to run an Attribute Agreement Analysis in Minitab (Stat > Quality Tools > Attribute Agreement Analysis):

Check here if the data is a ranking by degree, numerical or verbal (like 1 through 10, or A, B, C, D, etc.)

If your spreadsheet is arranged with all responses in one column, the sample labels in another, and the appraisers in a third column, use this section

Page 13

Attribute Agreement Analysis - Graphs

There are three vertical lines, one for each appraiser. The blue dot shows the level of agreement within their own assessments (left graph) and against the standard (right graph).

The Original Recon Processor, for example, never changed his mind; he always assessed the average time spent per process step the same way (100%). He is also 100% in agreement with the expert, which reflects his high level of experience with the recon processes.

The 2nd Recon Processor, like the original one, never changed his mind and always assessed the average time spent per process step the same way (100%). On the other hand, he is only 87.5% in agreement with the expert, due to the fact that he only occasionally performs the process, acting as the back-up person.

The 3rd Recon Processor is fairly consistent too, though not as consistent as the first two, being 75% in agreement with himself. Against the expert, however, he agrees only 62.5% of the time. The case of the 3rd appraiser can be explained by his being a newcomer who follows the training documentation to perform the process steps; here, an improvement in the training and documentation is desired in order to increase the assessment agreement.
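To make these percentages concrete (my own sketch, not from the slides): the within-appraiser figure is the share of samples rated identically across all 3 trials, and the vs-standard figure is the share of samples where all 3 trials match the expert. Using the responses from Tables 1-3:

# Reproduce the agreement percentages from Tables 1-3 (samples 1-8 in order).
responses = {
    "Appraiser 1": ["dbbaebba", "dbbaebba", "dbbaebba"],  # trials 1, 2, 3
    "Appraiser 2": ["dbbaebca", "dbbaebca", "dbbaebca"],
    "Appraiser 3": ["dbbafcca", "dbbafdca", "dbbaddca"],
}
standard = "dbbaebba"  # the expert's ratings

for appraiser, trials in responses.items():
    per_sample = list(zip(*trials))  # the 3 trial ratings for each sample
    within = sum(len(set(r)) == 1 for r in per_sample) / 8
    vs_std = sum(set(r) == {standard[i]} for i, r in enumerate(per_sample)) / 8
    print(f"{appraiser}: within {within:.1%}, vs standard {vs_std:.1%}")
# -> Appraiser 1: within 100.0%, vs standard 100.0%
# -> Appraiser 2: within 100.0%, vs standard 87.5%
# -> Appraiser 3: within 75.0%, vs standard 62.5%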

Page 14

Fleiss' Kappa Statistics

Next we move on to interpreting the session-window statistics from Minitab, but before that, here's a brief explanation of the kappa statistic.

The basis for the kappa statistic is a comparison to random chance. Imagine flipping a coin to make a quality decision on a process; that's random chance. Kappa compares the results gathered through the study with the possibility that those results could have been generated randomly, as if by flipping a coin or rolling a die.

Kappa ranges from -1 to +1, with a value of 0 indicating random chance. If kappa = 1, there is perfect agreement. If kappa = 0, the agreement is the same as would be expected by chance. The stronger the agreement, the higher the value of kappa. Negative values occur when agreement is weaker than expected by chance.
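In formula form (the standard definition; the slide states it only in words), kappa contrasts the observed proportion of agreement $P_o$ with the proportion expected by chance $P_e$:

$$\kappa = \frac{P_o - P_e}{1 - P_e}$$

So $\kappa = 1$ when observed agreement is perfect ($P_o = 1$), and $\kappa = 0$ when observed agreement is exactly what chance would produce ($P_o = P_e$).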

The hypothesis test for kappa goes as follows:

H0: The agreement within appraisers is due to chance
H1: The agreement within appraisers is not due to chance

The p-value gives the likelihood of obtaining the sample, with its kappa statistic, if the null hypothesis is true. If the p-value is less than or equal to a predetermined level of significance (the alpha level), reject the null hypothesis and conclude in favor of the alternative hypothesis.

Alpha = 0.05 for a 95% level of confidence

Page 15

Did each appraiser rate the average time per process step consistently across the trials?

Minitab results – within appraisers

Appraisers 1 and 2 agreed 100% across their three trials. Appraiser 3, however, agreed only 75% across his trials.

Appraiser 1 didn't choose ratings c or f, so kappa could not be calculated for those options. Overall, appraiser 1 agreed with himself 100%, so the kappa for the 'within' portion is 1, which indicates perfect agreement; and the p-value < 0.05 means we can reject the null hypothesis and conclude that the agreement within the appraiser is not due to chance.

Appraiser 2 didn't choose rating f, so kappa could not be calculated for that option. Overall, appraiser 2 agreed with himself 100%, so the kappa for the 'within' portion is 1, which again indicates perfect agreement, and the p-value < 0.05 means the agreement within the appraiser is not due to chance.

Appraiser 3 didn't choose rating e, so kappa could not be calculated for that option. Overall, appraiser 3 agreed with himself 75%, and the kappa for the 'within' portion is 0.78571, which indicates strong agreement; the p-value < 0.05 means we can reject the null hypothesis and conclude that the agreement within the appraiser is not due to chance.
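As a cross-check sketch (my addition, not part of the deck), the 'within' Fleiss kappa can be reproduced in Python by treating an appraiser's three trials as three raters over the eight samples; this assumes statsmodels is available, and for Appraiser 3 it matches the slide's 0.78571:

import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Appraiser 3's ratings: 8 samples (rows) x 3 trials (columns), from Tables 1-3.
trials = ["dbbafcca", "dbbafdca", "dbbaddca"]
data = np.array([[ord(t[s]) for t in trials] for s in range(8)])  # letters coded as ints

table, _ = aggregate_raters(data)  # counts of each rating per sample
print(fleiss_kappa(table))         # ~0.78571, matching Minitab's 'within' kappa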

Page 16

How did each appraiser rate the average time per process step against the standard?

Minitab results – each appraiser vs standard

Appraiser 1 agrees 100% of the time with the standard, while appraisers 2 and 3 agree only 87.5% and 62.5%, respectively, with the known standard.

For appraiser 1, all responses are in perfect agreement with the standard, so the kappa for the 'vs standard' portion is 1, except for the two options for which kappa cannot be computed; p-value < 0.05.

For appraiser 2, not all responses are in perfect agreement with the standard, but the overall kappa for the 'vs standard' portion is 0.82418, which indicates strong agreement. There is a negative value for option c, indicating that the result is worse than would be expected by chance. The p-value for this option (0.06280 > 0.05) means that we fail to reject the null hypothesis for it; however, the overall p-value for appraiser 2 is < 0.05.

For appraiser 3, not all responses are in perfect agreement with the standard, and the overall kappa for the 'vs standard' portion is 0.49634, which indicates poor agreement; improvement is required. There are two negative values, for options c and e, indicating that those results are worse than would be expected by chance. The p-values for these two options are higher than 0.05, which means that we fail to reject the null hypothesis for them; however, the overall p-value for appraiser 3 is < 0.05.

Page 17

How did each appraiser rate the average time per process step against the other appraisers?

Minitab results – between appraisers

All appraisers agree with each other 62.5% of the time in this study, with a 95% confidence interval for that agreement of 24.49% to 91.48%.

The overall kappa for all 3 appraisers is 0.73217, which indicates good agreement, and the p-value < 0.05, meaning the result is not due to chance. The agreement between the appraisers is perfect when a process step was rated with option a or f, good when they chose option b or d, and improvement is required for options c and e. The confidence interval above appears to be an exact binomial interval, as sketched below.
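As far as I can tell, the 24.49% to 91.48% interval is the exact (Clopper-Pearson) binomial confidence interval for 5 agreements out of 8 samples; a quick sketch to reproduce it, assuming scipy is available:

from scipy.stats import beta

# Exact (Clopper-Pearson) 95% interval for 5 agreements out of 8 samples (62.5%).
x, n, alpha = 5, 8, 0.05
lower = beta.ppf(alpha / 2, x, n - x + 1)
upper = beta.ppf(1 - alpha / 2, x + 1, n - x)
print(f"{lower:.2%} to {upper:.2%}")  # -> 24.49% to 91.48%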

Page 18

How did all appraisers rate the average time per process step against the standard?

Minitab results – all appraisers vs standard

All appraisers agree with the standard 62.5% of the time in this study, with a 95% confidence interval for that agreement of 24.49% to 91.48%.

The overall kappa for all appraisers is 0.77351, which indicates good agreement, and the p-value < 0.05, meaning the result is not due to chance. The agreement of all appraisers against the standard is perfect when a process step was rated with option a, and good when they chose option b, d, or e. Kappa cannot be computed for options c and f, as these responses do not appear in the standard.

Page 19

Kendall's correlation coefficient

If the standard is known and the data are ordered, Minitab also computes Kendall's coefficients: the coefficient of concordance for the within-appraisers and between-appraisers analyses, and the correlation coefficient for the vs-standard analyses. The correlation coefficient can range from -1 to 1: positive values indicate positive association, negative values indicate negative association, and the higher the magnitude, the stronger the association.
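For reference (my addition; Minitab additionally applies a correction for ties, so a hand computation will differ slightly), the basic form of Kendall's coefficient of concordance for $m$ raters ranking $n$ items is

$$W = \frac{12 \sum_{i=1}^{n} \left(R_i - \bar{R}\right)^2}{m^2\left(n^3 - n\right)}$$

where $R_i$ is the sum of the ranks given to item $i$ and $\bar{R}$ is the mean of the $R_i$; $W = 1$ means all raters produce identical rankings.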

Within Appraisers
Kendall's Coefficient of Concordance

Appraiser     Coef      Chi-Sq    DF   P
Appraiser 1   1.00000   21.0000   7    0.0038
Appraiser 2   1.00000   21.0000   7    0.0038
Appraiser 3   0.98194   20.6208   7    0.0044

Each Appraiser vs Standard
Kendall's Correlation Coefficient

Appraiser     Coef      SE Coef    Z         P
Appraiser 1   1.00000   0.166667   5.92857   0.0000
Appraiser 2   0.93541   0.166667   5.54106   0.0000
Appraiser 3   0.86947   0.166667   5.14540   0.0000

Between Appraisers
Kendall's Coefficient of Concordance

Coef       Chi-Sq    DF   P
0.948595   59.7615   7    0.0000

All Appraisers vs Standard
Kendall's Correlation Coefficient

Coef       SE Coef     Z         P
0.934962   0.0962250   9.67517   0.0000

The Kendall coefficient for each of the 4 levels of analysis is either 1, indicating perfect agreement, or very close to 1, indicating very good agreement.

Using the kappa statistics, we found previously that appraisers 2 and 3 did not always apply certain ratings with absolute consistency. However, the Kendall coefficients indicate that these rating discrepancies are not major. That is, the appraisers did not seriously misclassify the average time per process step.

Page 20

Thank you