* ARTICLE *
Schaffer, S. P., & Keller, J. (2003). Measuring the results of performance improvement interventions. Performance Improvement Quarterly, 16(1), 73–92.
* SYNOPSIS *
Despite the general belief that human performance technology (HPT) practitioners should conduct results-based evaluations (Level 4) of their interventions, surveys of those practitioners have repeatedly shown that relatively few actually do so. The authors chose to examine how HPT practitioners viewed the attributes of a results-based evaluation, and whether each attribute correlated with the frequency with which results-based evaluations were conducted. The authors identified five attributes related to the evaluation process itself, plus two organizational factors.
* RESEARCH QUESTIONS *
The authors posed two research questions:
1. What is the frequency of use of results-oriented evaluation among ISPI professionals who have at least a basic awareness or knowledge of this type of evaluation?
2. What is the relationship between ISPI members’ frequency of use, perception of attributes, and organizational variables relative to Level 4 evaluation?
The evaluation process attributes selected by the authors were as follows:
1. The relative advantage of Level 4 compared to current practice
2. Trialability, or degree to which Level 4 may be experimented with on a trial basis
3. Compatibility of Level 4 with current organizational practices
4. Complexity of implementing Level 4 evaluation
5. Observability of results of Level 4 evaluation to others
The authors also included two organizational factors:
1. Stakeholder participation in adoption, adaptation, and implementation
2. Top management support in terms of commitment and resources
* METHODS *
The authors developed a 42-item survey instrument which asked respondents to rate their knowledge of Level 4 evaluation, the frequency of its use within their organizations, and their perception of each attribute. Survey participants were solicited by e-mailing a random sample of 800 members of the International Society for Performance Improvement (ISPI). Of those 800, 80 email addresses were invalid; of the 720 valid recipients, 274 completed the survey, a 38% response rate. The authors found no significant demographic differences between respondents and non-respondents, indicating low response bias. They also found no significant differences in responses across respondents' job types (e.g., training director, performance technologist, consultant), and thus analyzed the data as one group.
This survey expanded on earlier research on attributes and usage by adding the two organizational factors to the Level 4 attributes. The survey also collected qualitative data through two open-ended questions which asked respondents to identify the single most difficult HPT process to complete and to give their opinion about Level 4 evaluations.
The authors constructed an HPT Change framework, based on the work of Rogers, Keller, Gilbert, and Rummler & Brache, which presented five major interconnected influences on human performance:
1. Opportunity (defined in this study as the organizational support and commitment to Level 4 evaluation)
2. Capability (defined in this study as the knowledge and skills needed to conduct Level 4 evaluations)
3. Motivation (defined in this study as the perception and attitude of potential adopters and stakeholders toward the Level 4 evaluation process)
4. Collaboration (defined in this study as the level of stakeholder or user involvement in the Level 4 evaluation process)
5. Value (defined in this study as the frequency of Level 4 usage)
The qualitative data was coded based on this HPT Change framework, through which themes in the responses were discovered.
* FINDINGS *
The authors found that the respondents had a high degree of awareness regarding Level 4 evaluations: about 92% reported at least basic knowledge of it, with 58% reporting above average or excellent knowledge. However, only about 3% reported using Level 4 evaluations most of the time (more than 75% of the time), and only about 25% reported using Level 4 at least some of the time (less than 75% of the time). 43% reported that their organizations were considering or testing Level 4 evaluations but had not yet reached full implementation, leaving 31% who were not using or even considering Level 4. These figures are consistent with the results of both prior and subsequent research.
When asked to rate each attribute on a scale of 1 to 5, with 5 indicating the most impact on successful implementation of Level 4, respondents rated the relative advantage of Level 4 the highest with an average score of 3.8, followed by trialability (3.6), compatibility (3.5), complexity (3.5), and observability (3.4). Both organizational factors were rated as having less perceived impact, with management support rated at 2.8 and stakeholder involvement at 2.7.
The authors also looked at the relationship between the five attributes, the two organizational factors (management support and stakeholder involvement), and the usage of Level 4. They hypothesized different models for interrelated impact but were unable to establish relationships using the data, and concluded that the relationships may exist but required further study to validate.
When asked which HPT process was the most difficult to complete, 30% chose evaluation, with implementation/change ranked second (21%) and performance analysis third (20%).
The themes which emerged from the qualitative data were the following:
1. Evaluation is a way of thinking
2. Leverage existing strengths and processes
3. Kirkpatrick Levels = Training (not perceived as a tool for evaluating non-training interventions)
4. Mixed Messages (confusing terminology, evaluation as an afterthought, evaluators not evaluating their own performance)
5. Rewards and Incentives (perception by stakeholders and management that evaluation either has no value of its own, or may decrease their own value by revealing errors and inefficiencies)
* DISCUSSION FOR IPT-N MEMBERS *
How would you rate the attributes the authors defined, in terms of their impact on your ability to complete Level 4 evaluations? Do you agree with the respondents that the attributes of the evaluation process itself have more impact than management support or stakeholder involvement?