Reliability Analysis: Statistics

You can select various statistics that describe your scale, your items, and the agreement among raters. Statistics that are reported by default include the number of cases, the number of items, and the following reliability estimates:

Alpha models
Coefficient alpha; for dichotomous data, this is equivalent to the Kuder-Richardson 20 (KR20) coefficient.
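
For illustration only (not the product's internal routine), coefficient alpha can be computed from a cases-by-items data matrix as in the following Python sketch. The function name and the sample array are hypothetical, and complete numeric data are assumed.

    import numpy as np

    def cronbach_alpha(data):
        """Coefficient alpha for a cases x items array with complete numeric data."""
        data = np.asarray(data, dtype=float)
        k = data.shape[1]                         # number of items
        item_vars = data.var(axis=0, ddof=1)      # variance of each item
        total_var = data.sum(axis=1).var(ddof=1)  # variance of the total score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # For 0/1 (dichotomous) items, the same formula yields the KR20 coefficient.
    scores = np.array([[1, 0, 1, 1],
                       [1, 1, 1, 0],
                       [0, 0, 1, 0],
                       [1, 1, 1, 1]])
    print(cronbach_alpha(scores))
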
Omega models
Estimation of McDonald’s omega to evaluate reliability.
Split-half models
Correlation between forms, Guttman split-half reliability, Spearman-Brown reliability (equal and unequal length), and coefficient alpha for each half.
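
The split-half coefficients can be illustrated with the same kind of array. This sketch simply splits the columns at their midpoint, which is an assumption for the example and may not match how the procedure splits the variable list; the per-half alpha values are omitted.

    import numpy as np

    def split_half(data):
        """Correlation between forms, Spearman-Brown (equal length), Guttman split-half."""
        data = np.asarray(data, dtype=float)
        k = data.shape[1]
        first = data[:, : k // 2].sum(axis=1)     # score on the first half of the items
        second = data[:, k // 2 :].sum(axis=1)    # score on the second half
        r = np.corrcoef(first, second)[0, 1]      # correlation between forms
        spearman_brown = 2 * r / (1 + r)          # equal-length correction
        guttman = 2 * (1 - (first.var(ddof=1) + second.var(ddof=1))
                       / (first + second).var(ddof=1))
        return r, spearman_brown, guttman
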
Guttman models
Reliability coefficients lambda 1 through lambda 6.
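
As a hedged sketch, the first three Guttman coefficients can be computed directly from the item covariance matrix (lambda 3 equals coefficient alpha). Lambda 4 through lambda 6 involve split halves and squared multiple correlations and are not shown here; the function name is hypothetical.

    import numpy as np

    def guttman_lambdas(data):
        """Guttman's lambda 1, lambda 2, and lambda 3 from a cases x items array."""
        data = np.asarray(data, dtype=float)
        k = data.shape[1]
        cov = np.cov(data, rowvar=False, ddof=1)   # item covariance matrix
        total_var = cov.sum()                      # variance of the total score
        item_var_sum = np.trace(cov)               # sum of the item variances
        lam1 = 1 - item_var_sum / total_var
        off_diag_sq = (cov ** 2).sum() - (np.diag(cov) ** 2).sum()
        lam2 = lam1 + np.sqrt(k / (k - 1) * off_diag_sq) / total_var
        lam3 = k / (k - 1) * lam1                  # equals coefficient alpha
        return lam1, lam2, lam3
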
Parallel and Strict parallel models
Test for goodness of fit of model; estimates of error variance, common variance, and true variance; estimated common inter-item correlation; estimated reliability; and unbiased estimate of reliability.
Descriptives for
Produces descriptive statistics for scales or items across cases.
Item
Produces descriptive statistics for items across cases.
Scale
Produces descriptive statistics for scales.
Scale if item deleted
Displays summary statistics comparing each item to the scale that is composed of the other items. Statistics include scale mean and variance if the item were to be deleted from the scale, correlation between the item and the scale that is composed of other items, and Cronbach's alpha if the item were to be deleted from the scale.
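
A minimal sketch of these item-total statistics, assuming a hypothetical cases x items NumPy array with at least three items; the corrected item-total correlation correlates each item with the sum of the remaining items.

    import numpy as np

    def item_total_statistics(data):
        """Scale mean/variance if deleted, corrected item-total r, alpha if deleted."""
        data = np.asarray(data, dtype=float)
        n, k = data.shape                              # requires k >= 3
        rows = []
        for i in range(k):
            rest = np.delete(data, i, axis=1)          # scale without item i
            rest_total = rest.sum(axis=1)
            alpha_rest = (k - 1) / (k - 2) * (
                1 - rest.var(axis=0, ddof=1).sum() / rest_total.var(ddof=1))
            rows.append({
                "scale_mean_if_deleted": rest_total.mean(),
                "scale_var_if_deleted": rest_total.var(ddof=1),
                "corrected_item_total_r": np.corrcoef(data[:, i], rest_total)[0, 1],
                "alpha_if_deleted": alpha_rest,
            })
        return rows
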
Summaries
Provides descriptive statistics of item distributions across all items in the scale (a computational sketch for the correlation case follows the Covariances setting below).
Means
Summary statistics for item means. The smallest, largest, and average item means, the range and variance of item means, and the ratio of the largest to the smallest item means are displayed.
Variances
Summary statistics for item variances. The smallest, largest, and average item variances, the range and variance of item variances, and the ratio of the largest to the smallest item variances are displayed.
Correlations
Summary statistics for inter-item correlations. The smallest, largest, and average inter-item correlations, the range and variance of inter-item correlations, and the ratio of the largest to the smallest inter-item correlations are displayed.
Covariances
Summary statistics for inter-item covariances. The smallest, largest, and average inter-item covariances, the range and variance of inter-item covariances, and the ratio of the largest to the smallest inter-item covariances are displayed.
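
The summary rows can be illustrated for the inter-item correlations; the same pattern applies to item means, item variances, and inter-item covariances. The helper below is a hedged sketch with a hypothetical name, not the procedure's own code.

    import numpy as np

    def interitem_correlation_summary(data):
        """Smallest, largest, average, range, variance, and max/min ratio
        of the off-diagonal inter-item correlations."""
        r = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
        vals = r[np.triu_indices_from(r, k=1)]      # unique off-diagonal entries
        return {
            "mean": vals.mean(),
            "minimum": vals.min(),
            "maximum": vals.max(),
            "range": vals.max() - vals.min(),
            "max_over_min": vals.max() / vals.min(),
            "variance": vals.var(ddof=1),
        }
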
Inter-Item
Produces matrices of correlations or covariances between items.
ANOVA Table
Produces tests of equal means.
F test
Displays a repeated measures analysis-of-variance table.
Friedman chi-square
Displays Friedman's chi-square and Kendall's coefficient of concordance. This option is appropriate for data that are in the form of ranks. The chi-square test replaces the usual F test in the ANOVA table.
Cochran chi-square
Displays Cochran's Q. This option is appropriate for data that are dichotomous. The Q statistic replaces the usual F statistic in the ANOVA table.
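
For dichotomous data, Cochran's Q can be sketched as follows; the 0/1 cases x items array and the function name are hypothetical, and the chi-square comparison is noted in a comment rather than computed.

    import numpy as np

    def cochran_q(data):
        """Cochran's Q for a cases x items array of 0/1 values."""
        data = np.asarray(data, dtype=float)
        k = data.shape[1]                 # number of items (treatments)
        col_totals = data.sum(axis=0)     # successes per item
        row_totals = data.sum(axis=1)     # successes per case
        grand_total = data.sum()
        q = ((k - 1) * (k * (col_totals ** 2).sum() - grand_total ** 2)
             / (k * grand_total - (row_totals ** 2).sum()))
        return q                          # compare with chi-square on k - 1 df
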
Interrater Agreement: Fleiss' Kappa
Assesses interrater agreement to determine the reliability among the various raters. Higher agreement provides more confidence that the ratings reflect the true circumstance. The generalized, unweighted kappa statistic measures agreement among a constant number of raters (a computational sketch follows this list). The following requirements and assumptions apply:
  • At least two item variables must be specified to run any reliability statistic.
  • At least two ratings variables must be specified.
  • Variables that are selected as items can also be selected as ratings.
  • There is no connection between raters.
  • The number of raters is a constant.
  • The raters who rate one subject are not assumed to be the same raters who rate another subject.
  • No weights are assigned to the various disagreements.
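
As a hedged sketch of the unweighted multiple-rater kappa, assume the ratings have already been tabulated into a hypothetical subjects x categories count matrix in which every row sums to the constant number of raters; the tabulation step, the asymptotic standard error, and the category-level estimates that the procedure reports are omitted.

    import numpy as np

    def fleiss_kappa(counts):
        """Fleiss' kappa from a subjects x categories matrix of rating counts."""
        counts = np.asarray(counts, dtype=float)
        n_raters = counts.sum(axis=1)[0]           # constant number of raters per subject
        p_j = counts.sum(axis=0) / counts.sum()    # overall proportion in each category
        # Per-subject observed agreement, averaged over subjects
        p_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
        p_bar = p_i.mean()
        p_e = (p_j ** 2).sum()                     # expected (chance) agreement
        return (p_bar - p_e) / (1 - p_e)

    # Hypothetical example: 4 subjects, 3 raters, 3 categories
    counts = np.array([[3, 0, 0],
                       [1, 2, 0],
                       [0, 1, 2],
                       [1, 1, 1]])
    print(fleiss_kappa(counts))
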
Display agreement on individual categories
Specifies whether to display agreement on individual rating categories. By default, estimates for individual categories are suppressed. When the setting is enabled, additional tables are included in the output.
Ignore string cases
Controls whether string rating values are treated as case sensitive. By default, string rating values are case sensitive.
String category labels are displayed in uppercase
Controls whether the category labels in the output tables are displayed in uppercase or lowercase. The setting is enabled by default, which displays the string category labels in uppercase.
Asymptotic significance level (%)
Specifies the level, as a percentage, that is used for the asymptotic confidence intervals. The default setting is 95.
Missing
Exclude both user-missing and system-missing values
Controls the exclusion of user-missing and system-missing values. By default, user-missing and system-missing values are excluded.
User-missing values are treated as valid
When enabled, treats user-missing values as valid data. The setting is disabled by default.
Hotelling's T-square
Produces a multivariate test of the null hypothesis that all items on the scale have the same mean.
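
A hedged sketch of this test: per-case differences between each item and the last item are formed, and Hotelling's T-square is converted to an F statistic with k - 1 and n - k + 1 degrees of freedom. The function name and array are hypothetical.

    import numpy as np

    def hotelling_t_square(data):
        """Hotelling's T-square test that all item means are equal."""
        data = np.asarray(data, dtype=float)
        n, k = data.shape
        diffs = data[:, :-1] - data[:, [-1]]       # each item minus the last item
        d_bar = diffs.mean(axis=0)                 # mean difference vector
        s = np.cov(diffs, rowvar=False, ddof=1)    # covariance of the differences
        t2 = n * d_bar @ np.linalg.solve(s, d_bar)
        f_stat = (n - k + 1) / ((n - 1) * (k - 1)) * t2
        return t2, f_stat                          # F has (k - 1, n - k + 1) df
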
Tukey's test of additivity
Produces a test of the assumption that there is no multiplicative interaction among the items.
Intraclass correlation coefficient
Produces measures of consistency or agreement of values within cases.
Model
Select the model for calculating the intraclass correlation coefficient. Available models are Two-Way Mixed, Two-Way Random, and One-Way Random. Select Two-Way Mixed when people effects are random and item effects are fixed, Two-Way Random when both people and item effects are random, or One-Way Random when people effects are random. A computational sketch for the two-way, single-measure coefficients follows the Test value setting below.
Type
Select the type of index. Available types are Consistency and Absolute Agreement.
Confidence interval (%)
Specify the level for the confidence interval. The default is 95%.
Test value
Specify the hypothesized value of the coefficient for the hypothesis test. This value is the value to which the observed value is compared. The default value is 0.
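
A hedged sketch of the single-measure intraclass correlations for the two-way models, computed from the mean squares of a cases x items layout in the Shrout and Fleiss formulation; the one-way model, the average-measure variants, confidence intervals, and the hypothesis test are omitted.

    import numpy as np

    def icc_two_way(data):
        """Single-measure consistency and absolute-agreement ICCs (two-way models)."""
        data = np.asarray(data, dtype=float)
        n, k = data.shape
        grand = data.mean()
        ms_rows = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between cases
        ms_cols = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between items
        resid = (data - data.mean(axis=1, keepdims=True)
                      - data.mean(axis=0, keepdims=True) + grand)
        ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))                  # residual
        consistency = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
        absolute = (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
        return consistency, absolute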

Specifying Statistics settings

This feature requires the Statistics Base option.

  1. From the menus choose:

    Analyze > Scale > Reliability Analysis...

  2. In the Reliability Analysis dialog, click Statistics.