Reliability Analysis
Reliability analysis allows you to study the properties of measurement scales and the items that compose the scales. The Reliability Analysis procedure calculates a number of commonly used measures of scale reliability and also provides information about the relationships between individual items in the scale. Intraclass correlation coefficients can be used to compute interrater reliability estimates.
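As an illustration of the intraclass correlation idea, the following sketch computes a one-way random-effects ICC from a small hypothetical ratings matrix (rows are subjects, columns are raters). The data and function name are illustrative assumptions, not part of the procedure itself.

```python
# One-way random-effects intraclass correlation, ICC(1,1):
# share of total variance attributable to differences between subjects.
# Hypothetical data: rows are subjects, columns are raters.

def icc_oneway(rows):
    n, k = len(rows), len(rows[0])
    grand = sum(sum(r) for r in rows) / (n * k)
    means = [sum(r) / k for r in rows]
    ss_between = k * sum((m - grand) ** 2 for m in means)
    ss_within = sum((x - m) ** 2 for r, m in zip(rows, means) for x in r)
    ms_between = ss_between / (n - 1)          # between-subjects mean square
    ms_within = ss_within / (n * (k - 1))      # within-subjects mean square
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

ratings = [(4, 5), (2, 2), (5, 4)]
print(round(icc_oneway(ratings), 3))  # → 0.852
```

Values near 1 indicate that raters order and score subjects consistently; values near 0 indicate that rater disagreement swamps true subject differences.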
Reliability analysis also provides Fleiss' Multiple Rater Kappa statistics, which assess interrater agreement to determine the reliability among the various raters. Higher agreement provides more confidence that the ratings reflect the true circumstance. The Fleiss' Multiple Rater Kappa options are available in the Reliability Analysis: Statistics dialog.
 Example
 Does my questionnaire measure customer satisfaction in a useful way? Using reliability analysis, you can determine the extent to which the items in your questionnaire are related to each other, you can get an overall index of the repeatability or internal consistency of the scale as a whole, and you can identify problem items that should be excluded from the scale.
 Statistics
 Descriptives for each variable and for the scale, summary statistics across items, interitem correlations and covariances, reliability estimates, ANOVA table, intraclass correlation coefficients, Hotelling's T², Tukey's test of additivity, and Fleiss' Multiple Rater Kappa.
 Models
 The following models of reliability are available:
 Alpha (Cronbach)
 This model is a measure of internal consistency based on the average interitem correlation.
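Cronbach's alpha can be computed directly from item and total-score variances: alpha = k/(k−1) · (1 − Σ item variances / total-score variance). A minimal sketch on hypothetical questionnaire data (rows are respondents, columns are items):

```python
# Cronbach's alpha from raw item scores.
# Hypothetical data: each row is one respondent's answers to 3 items.

def cronbach_alpha(rows):
    k = len(rows[0])                      # number of items
    cols = list(zip(*rows))               # item columns

    def var(xs):                          # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(c) for c in cols)
    total_var = var([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var_sum / total_var)

scores = [(4, 5, 4), (3, 4, 3), (5, 5, 5), (2, 3, 2), (4, 4, 5)]
print(round(cronbach_alpha(scores), 3))  # → 0.934
```

When items covary strongly, the total-score variance is much larger than the sum of item variances, pushing alpha toward 1.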
 Omega (McDonald's)
This model assumes that the scale is unidimensional: a single factor with no local item dependence in the form of error covariances. Under this model, the covariance between any two different items is the product of their factor loadings.
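Given such a single-factor model, omega is the share of total variance explained by the factor: (Σ loadings)² / ((Σ loadings)² + Σ error variances). A sketch using hypothetical standardized loadings (the values are assumptions for illustration):

```python
# McDonald's omega for a unidimensional factor model.
# The reliable variance is (sum of loadings)^2; the remainder is item error.

def mcdonald_omega(loadings, error_variances):
    s = sum(loadings)
    return s * s / (s * s + sum(error_variances))

loadings = [0.7, 0.8, 0.6, 0.75]          # hypothetical standardized loadings
errors = [1 - l * l for l in loadings]    # standardized items: error = 1 - loading^2
print(round(mcdonald_omega(loadings, errors), 3))  # → 0.807
```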
 Split-half
 This model splits the scale into two parts and examines the correlation between the parts.
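A common form of this estimate correlates the two half-scale totals and then applies the Spearman-Brown correction, 2r/(1+r), to project the reliability of the full-length scale. A sketch on hypothetical item scores, splitting the items into first and second halves (one of several possible splits):

```python
# Split-half reliability with Spearman-Brown correction.
# Hypothetical data: rows are respondents, columns are 4 items.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def split_half(rows):
    k = len(rows[0]) // 2
    first = [sum(r[:k]) for r in rows]    # total over the first half of items
    second = [sum(r[k:]) for r in rows]   # total over the second half
    r = pearson(first, second)
    return 2 * r / (1 + r)                # Spearman-Brown corrected estimate

scores = [(4, 5, 4, 5), (3, 4, 3, 3), (5, 5, 5, 4), (2, 3, 2, 2), (4, 4, 5, 4)]
print(round(split_half(scores), 3))
```

Note that the estimate depends on which items land in each half; odd/even or random splits are also used.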
 Guttman
 This model computes Guttman's lower bounds for true reliability.
 Parallel
 This model assumes that all items have equal variances and equal error variances across replications.
 Strict parallel
 This model makes the assumptions of the Parallel model and also assumes equal means across items.
Reliability Analysis data considerations
 Data
 Data can be dichotomous, ordinal, or interval, but the data should be coded numerically.
 Assumptions
 Observations should be independent, and errors should be uncorrelated between items. Each pair
of items should have a bivariate normal distribution. Scales should be additive, so that each item
is linearly related to the total score. The following assumptions apply for Fleiss' Multiple Rater
Kappa statistics:
 At least two item variables must be selected to run any reliability statistic.
 When at least two ratings variables are selected, the Fleiss' Multiple Rater Kappa syntax is pasted.
 Raters are independent of one another (there is no connection between raters).
 The number of raters is a constant.
 Each subject is rated by the same group containing only a single rater.
 No weights can be assigned to the various disagreements.
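Under these assumptions, Fleiss' kappa compares the observed per-subject agreement with the agreement expected by chance from the overall category proportions. A minimal sketch on hypothetical count data (rows are subjects, columns are categories, each entry is the number of raters who chose that category):

```python
# Fleiss' kappa for multiple raters and nominal categories.
# counts[i][j] = number of raters assigning subject i to category j;
# every row must sum to the same number of raters k. Data are hypothetical.

def fleiss_kappa(counts):
    n = len(counts)                       # number of subjects
    k = sum(counts[0])                    # raters per subject (constant)
    total = n * k                         # total ratings
    # mean observed per-subject agreement
    p_bar = sum((sum(c * c for c in row) - k) / (k * (k - 1))
                for row in counts) / n
    # chance agreement from overall category proportions
    p_e = sum((sum(row[j] for row in counts) / total) ** 2
              for j in range(len(counts[0])))
    return (p_bar - p_e) / (1 - p_e)

ratings = [[3, 0], [2, 1], [0, 3], [1, 2]]   # 4 subjects, 3 raters, 2 categories
print(round(fleiss_kappa(ratings), 3))  # → 0.333
```

Kappa of 1 means perfect agreement; 0 means agreement no better than chance.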
 Related procedures
 If you want to explore the dimensionality of your scale items (to see whether more than one construct is needed to account for the pattern of item scores), use factor analysis or multidimensional scaling. To identify homogeneous groups of variables, use hierarchical cluster analysis to cluster variables.
To obtain a Reliability Analysis
This feature requires the Statistics Base option.
 From the menus choose:
 Select two or more variables as potential components of an additive scale.
 Choose a model from the Model dropdown list.
 Optionally, click Statistics to select various statistics that describe your scale items or interrater agreement.
This procedure pastes RELIABILITY command syntax.