Reliability Analysis

Reliability analysis allows you to study the properties of measurement scales and the items that compose the scales. The Reliability Analysis procedure calculates a number of commonly used measures of scale reliability and also provides information about the relationships between individual items in the scale. Intraclass correlation coefficients can be used to compute inter-rater reliability estimates.

Reliability analysis also provides Fleiss' Multiple Rater Kappa statistics, which assess interrater agreement to determine the reliability of ratings across raters. Higher agreement provides more confidence that the ratings reflect the true circumstance. The Fleiss' Multiple Rater Kappa options are available in the Reliability Analysis: Statistics dialog.

Example: Does my questionnaire measure customer satisfaction in a useful way? Using reliability analysis, you can determine the extent to which the items in your questionnaire are related to each other, obtain an overall index of the repeatability or internal consistency of the scale as a whole, and identify problem items that should be excluded from the scale.
Statistics: Descriptives for each variable and for the scale, summary statistics across items, inter-item correlations and covariances, reliability estimates, ANOVA table, intraclass correlation coefficients, Hotelling's T², Tukey's test of additivity, and Fleiss' Multiple Rater Kappa.
The following models of reliability are available:
Alpha (Cronbach)
This model is a measure of internal consistency based on the average inter-item correlation.
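As an illustration only (not SPSS's implementation), the alpha coefficient based on the average inter-item correlation — the standardized form of Cronbach's alpha — can be sketched in NumPy:

```python
import numpy as np

def standardized_alpha(items):
    """Standardized Cronbach's alpha from the average inter-item correlation.

    items: 2-D array, one row per respondent, one column per scale item.
    """
    r = np.corrcoef(items, rowvar=False)    # inter-item correlation matrix
    k = r.shape[0]                          # number of items
    off_diag = r[~np.eye(k, dtype=bool)]    # correlations, diagonal excluded
    r_bar = off_diag.mean()                 # average inter-item correlation
    return k * r_bar / (1 + (k - 1) * r_bar)
```

Note that SPSS's default Alpha output is based on item covariances; the standardized variant shown here is the one that follows directly from the average inter-item correlation described above.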
Omega (McDonald's)
This model assumes that the scale is unidimensional, with a single factor and no local item dependence in the form of error covariances. The model implies that the covariance of any two different items is the product of their factor loadings.
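Assuming the single-factor loadings and error variances have already been estimated (SPSS estimates them internally), McDonald's omega can be sketched as the share of total-score variance attributable to the common factor:

```python
import numpy as np

def mcdonald_omega(loadings, error_variances):
    """McDonald's omega for a unidimensional (single-factor) scale.

    loadings: factor loading of each item on the common factor.
    error_variances: unique (error) variance of each item.
    Under the model, cov(item_i, item_j) = loadings[i] * loadings[j]
    for any two different items i and j.
    """
    lam = np.asarray(loadings, dtype=float)
    psi = np.asarray(error_variances, dtype=float)
    common = lam.sum() ** 2               # variance due to the common factor
    return common / (common + psi.sum())  # proportion of "true" variance
```

For example, three items each loading 0.7 with error variance 0.51 give omega = 4.41 / 5.94 ≈ 0.74.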

Split-half
This model splits the scale into two parts and examines the correlation between the parts.
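A minimal sketch of this idea, correlating two halves and applying the Spearman-Brown step-up correction to estimate full-length reliability (SPSS splits the items in the order they are selected; the first-half/second-half split here is an assumption for illustration):

```python
import numpy as np

def split_half_reliability(items):
    """Split-half reliability with the Spearman-Brown correction.

    items: 2-D array (respondents x items). The scale is split into a
    first half and a second half; with an odd item count the first half
    gets the extra item.
    """
    k = items.shape[1]
    half = (k + 1) // 2
    part1 = items[:, :half].sum(axis=1)   # total score on the first half
    part2 = items[:, half:].sum(axis=1)   # total score on the second half
    r = np.corrcoef(part1, part2)[0, 1]   # correlation between the halves
    return 2 * r / (1 + r)                # Spearman-Brown full-length estimate
```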
Guttman
This model computes Guttman's lower bounds for true reliability.
Parallel
This model assumes that all items have equal variances and equal error variances across replications.
Strict parallel
This model makes the assumptions of the Parallel model and also assumes equal means across items.

Reliability Analysis data considerations

Data: Data can be dichotomous, ordinal, or interval, but the data should be coded numerically.
Assumptions: Observations should be independent, and errors should be uncorrelated between items. Each pair of items should have a bivariate normal distribution. Scales should be additive, so that each item is linearly related to the total score. The following assumptions apply for Fleiss' Multiple Rater Kappa statistics:
  • At least two item variables must be selected to run any reliability statistic.
  • When at least two ratings variables are selected, the Fleiss' Multiple Rater Kappa syntax is pasted.
  • There is no connection between raters.
  • The number of raters is a constant.
  • Each subject is rated by the same rater group, with each rater contributing a single rating.
  • No weights can be assigned to the various disagreements.
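Under these assumptions, Fleiss' kappa can be sketched from a subjects-by-categories table of rating counts (a common input layout, not SPSS's internal representation):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for agreement among multiple raters.

    counts: 2-D array (subjects x categories); counts[i, j] is the number
    of raters who assigned subject i to category j. Every row must sum to
    the same number of raters n (the constant-raters assumption above).
    """
    counts = np.asarray(counts, dtype=float)
    N = counts.shape[0]                    # number of subjects
    n = counts.sum(axis=1)[0]              # raters per subject (constant)
    p_j = counts.sum(axis=0) / (N * n)     # overall category proportions
    # Per-subject observed agreement among the n raters:
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    P_bar = P_i.mean()                     # mean observed agreement
    P_e = np.square(p_j).sum()             # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)
```

Perfect agreement yields kappa = 1; agreement at or below chance yields kappa ≤ 0.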
Related procedures
If you want to explore the dimensionality of your scale items (to see whether more than one construct is needed to account for the pattern of item scores), use factor analysis or multidimensional scaling. To identify homogeneous groups of variables, use hierarchical cluster analysis to cluster variables.

To obtain a Reliability Analysis

This feature requires the Statistics Base option.

  1. From the menus choose:

    Analyze > Scale > Reliability Analysis...

  2. Select two or more variables as potential components of an additive scale.
  3. Choose a model from the Model drop-down list.
  4. Optionally, click Statistics to select various statistics that describe your scale items or interrater agreement.

This procedure pastes RELIABILITY command syntax.