Attribute MSA is also known as Attribute Agreement Analysis. Use the Ordinal option if the assessed result is numeric ordinal (e.g., 1, 2, 3, 4, 5). There must be at least 3 response levels in the assessed result; otherwise, it is binary. Examples of ordinal responses used elsewhere in this workbook include:
An Ordinal Attribute MSA study should be done prior to formal ordinal data collection for use in hypothesis testing, regression or design of experiments.
Tip: While this report is quite extensive, a
quick assessment of the attribute measurement system can be made by
viewing the Kendall Concordance and Kendall Correlation color
highlights: Green - very good agreement; Yellow - marginally
acceptable, improvement should be considered; Red - unacceptable.
Further details on the Kendall Coefficients are given below.
Tip: Fleiss Kappa and Percent Agreement are included in the report for completeness, but they are not recommended for use with Ordinal response data because they treat each response level as nominal. Kendall's Concordance and Correlation take the order of the data into account, so a deviation of 1 is not as bad as a deviation of 2 or more. See Attribute MSA Nominal for a discussion of the Fleiss Kappa report.
Kendall's Coefficient of Concordance (Kendall's W) is a measure of association for discrete ordinal data, used for assessments that do not include a known reference standard. Kendall's coefficient of concordance ranges from 0 to 1: A coefficient value of 1 indicates perfect agreement. If the coefficient = 0, then the agreement is random, i.e., the same as would be expected by chance. Rule-of-thumb interpretation guidelines: >= 0.9 very good agreement (green); 0.7 to < 0.9 marginally acceptable, improvement should be considered (yellow); < 0.7 unacceptable (red).
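The following is a minimal Python sketch of the standard tie-corrected formula for Kendall's W, not the report's exact implementation. Ratings are arranged as one row per part and one column per appraiser:

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings):
    """Kendall's W with the standard tie correction.
    ratings: array of shape (n_parts, m_appraisers) of ordinal scores."""
    ratings = np.asarray(ratings, dtype=float)
    n, m = ratings.shape
    # Rank each appraiser's column; ties receive average ranks
    ranks = np.column_stack([rankdata(col) for col in ratings.T])
    rank_sums = ranks.sum(axis=1)
    S = ((rank_sums - rank_sums.mean()) ** 2).sum()
    # Tie correction: sum of (t^3 - t) over tied groups within each appraiser
    T = sum((c ** 3 - c).sum()
            for c in (np.unique(col, return_counts=True)[1] for col in ratings.T))
    return 12 * S / (m ** 2 * (n ** 3 - n) - m * T)

# Example: 4 parts rated by 3 appraisers on a 1-5 scale (illustrative data)
ratings = np.array([[1, 1, 2],
                    [3, 3, 3],
                    [5, 4, 5],
                    [2, 2, 1]])
print(kendalls_w(ratings))  # ~0.911 -> very good agreement
```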
Kendall's Concordance P-Value: H0: Kendall's Coefficient of Concordance = 0. If P-Value < alpha (.05 for specified 95% confidence level), reject H0 and conclude that agreement is not the same as would be expected by chance. Significant P-Values are highlighted in red. See Appendix Kendall's Coefficient of Concordance for further details on the Kendall Concordance calculations and rule-of-thumb interpretation guidelines.
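A common large-sample approximation for this test uses the Friedman chi-square relation m(n-1)W with n-1 degrees of freedom; the report's exact calculation may differ. A sketch:

```python
from scipy.stats import chi2

def concordance_pvalue(W, n_parts, m_appraisers):
    """Large-sample p-value for H0: agreement is random.
    Uses the Friedman chi-square relation: m(n-1)W ~ chi2 with n-1 df."""
    stat = m_appraisers * (n_parts - 1) * W
    return chi2.sf(stat, df=n_parts - 1)

print(concordance_pvalue(0.911, n_parts=4, m_appraisers=3))  # ~0.042 < .05 -> reject H0
```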
Kendall's Concordance LC (Lower Confidence) limit and Kendall's Concordance UC (Upper Confidence) limit cannot be solved analytically, so they are estimated using bootstrapping. Interpretation Guidelines: Concordance lower confidence limit >= 0.9: very good agreement. Concordance upper confidence limit < 0.7: the attribute agreement is unacceptable. Wide confidence intervals indicate that the sample size is inadequate.
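A percentile bootstrap over parts is one way to obtain these limits; the percentile method here is an assumption, and the report's bootstrap variant may differ. This sketch reuses kendalls_w() from the example above:

```python
import numpy as np

def bootstrap_w_ci(ratings, n_boot=2000, conf=0.95, seed=42):
    """Percentile-bootstrap confidence limits for Kendall's W,
    resampling parts (rows) with replacement."""
    rng = np.random.default_rng(seed)
    ratings = np.asarray(ratings)
    n = ratings.shape[0]
    stats = []
    for _ in range(n_boot):
        with np.errstate(invalid="ignore"):
            w = kendalls_w(ratings[rng.integers(0, n, size=n)])
        if np.isfinite(w):  # drop degenerate resamples (all ranks tied)
            stats.append(w)
    alpha = 1.0 - conf
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

print(bootstrap_w_ci(ratings))  # wide interval here: only 4 parts -> inadequate sample size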
The Within Appraiser Agreement is marginal for Joe, unacceptable for Moe, and very good for Sue.
The Between Appraiser Agreement is unacceptable.
Kendall's Correlation/CI Each Appraiser vs. Standard Effectiveness Graph:
Kendall's Correlation Coefficient (Kendall's tau-b) is a measure of association for discrete ordinal data, used for assessments that include a known reference standard. Kendall's correlation coefficient ranges from -1 to 1: A coefficient value of 1 indicates perfect agreement. If the coefficient = 0, then the agreement is random, i.e., the same as would be expected by chance. A coefficient value of -1 indicates perfect disagreement. Rule-of-thumb interpretation guidelines: >= 0.8 very good agreement (green); 0.6 to < 0.8 marginally acceptable, improvement should be considered (yellow); < 0.6 unacceptable (red).
Kendall's Correlation P-Value: H0:
Kendall's Correlation Coefficient = 0. If P-Value < alpha (.05 for
specified 95% confidence level), reject H0 and conclude that
agreement is not the same as would be expected by chance.
Significant P-Values are highlighted in red.
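Both the coefficient and its p-value can be reproduced for a single appraiser with scipy, which returns the tie-adjusted tau-b by default (the ratings below are hypothetical, not the study data):

```python
from scipy.stats import kendalltau

# Hypothetical ratings for ten parts: one appraiser vs. the known standard
appraiser = [1, 2, 2, 3, 4, 5, 3, 1, 4, 5]
standard  = [1, 2, 3, 3, 4, 5, 3, 2, 4, 5]

# kendalltau returns tau-b (tie-adjusted) plus the p-value for H0: tau = 0
tau, p_value = kendalltau(appraiser, standard)
print(f"Kendall's tau-b = {tau:.3f}, p-value = {p_value:.4f}")
```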
Kendall's Correlation LC (Lower Confidence) and Kendall's Correlation UC (Upper Confidence) limits use a normal approximation. Interpretation Guidelines: Correlation lower confidence limit >= 0.8: very good agreement. Correlation upper confidence limit < 0.6: the attribute agreement is unacceptable. Wide confidence intervals indicate that the sample size is inadequate.
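One published normal approximation applies a Fisher z transform to tau with the Fieller-Hartley-Pearson variance approximation 0.437/(n - 4); the report's exact formula may differ. A sketch:

```python
import numpy as np
from scipy.stats import norm

def kendall_tau_ci(tau, n, conf=0.95):
    """Normal-approximation confidence limits for Kendall's tau via a
    Fisher z transform (variance approximation 0.437/(n - 4); n > 4)."""
    z = np.arctanh(tau)
    se = np.sqrt(0.437 / (n - 4))
    zcrit = norm.ppf(0.5 + conf / 2)
    return float(np.tanh(z - zcrit * se)), float(np.tanh(z + zcrit * se))

print(kendall_tau_ci(0.85, n=30))  # e.g., LC/UC for tau = 0.85 from 30 parts
```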
Tip: Kendall's Correlation values in the Effectiveness tables are very similar to those in the Agreement tables (the slight difference is due to average Kendall for unstacked versus Kendall for stacked data). This is why the Kendall's Correlation/CI Each Appraiser vs. Standard Agreement graph is not shown. It would essentially be a duplicate of the Kendall's Correlation/CI Each Appraiser vs. Standard Effectiveness graph.
Appraiser Joe has marginal agreement with the standard values, Appraiser Moe has unacceptable agreement, and Sue has very good agreement. Overall, the appraisers have marginal agreement with the standard.
Note that the Percent Agreement results in the All Appraisers vs. Standard Agreement Table show only 2% agreement! This is due to the requirement that all appraisers agree with the standard across all trials for a 5-level response, which is very unlikely to occur. This highlights the problem with using Percent Agreement in an Ordinal MSA. Kendall's coefficients are the key metric to assess an Ordinal MSA.
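A back-of-the-envelope calculation shows why this metric collapses (the per-rating accuracy and trial count below are assumed for illustration, not taken from the study):

```python
# Assumed illustration: 3 appraisers x 3 trials = 9 ratings per part.
# Even if each single rating matched the 5-level standard 90% of the
# time, the chance that all 9 ratings match on a given part is only:
p_single = 0.90
print(p_single ** 9)  # ~0.387, and it falls fast as accuracy drops
```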
Effectiveness and Misclassification Summary is a summary table of all appraisers' correct rating counts and misclassification counts compared to the known reference standard values.
Attribute MSA Data is a summary showing the original data in unstacked format. This makes it easy to visually compare appraiser results by part. If a reference standard is provided, the cells are color highlighted as follows: absolute deviation = 0 (green); absolute deviation = 1 (yellow); absolute deviation >= 2 (red).
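The highlighting rule maps directly to code; a minimal sketch (the function name is illustrative):

```python
def deviation_color(rating, standard):
    """Highlight color for one cell per the rule above:
    |rating - standard| = 0 -> green, = 1 -> yellow, >= 2 -> red."""
    dev = abs(int(rating) - int(standard))
    return "green" if dev == 0 else "yellow" if dev == 1 else "red"

print(deviation_color(4, 5))  # yellow
```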
In conclusion, this measurement system is marginal and should be improved. Appraiser Moe needs training and Appraiser Joe needs a refresher. Sue has very good agreement based on Kendall's Concordance and Correlation, but would have been considered marginal based on Kappa (< .9) and Percent Effectiveness (< 95%). As discussed above, Kappa and Percent Effectiveness do not take the order of the response data into account, so they are not as useful as Kendall's coefficients in an Ordinal MSA study.