Thesis summary

September 27th, 2012

Now that the final thesis has been approved by the Boise State University Graduate College and I’ve safely graduated, I’m ready to present the research results. The actual thesis will be available for download here after it has been uploaded to the electronic thesis library. As most sane people would prefer not to plow through a 142-page document, I’ll summarize it in plain English.

The really short version: we’re not evaluating training based on on-the-job behaviors and organizational outcomes because there isn’t enough time, staffing, or management support. However, we’re also not entirely sure what Level 4 evaluations are.

The really long version: My research was a look at training professionals’ usage and understanding of Kirkpatrick’s Level 3 and Level 4 evaluations. I wanted to explore the factors which helped or hindered the performance of evaluations which examined on-the-job application of the knowledge/skills learned (Level 3) and the impact of training on forwarding organizational goals (Level 4). I’d like to thank those individuals who completed the research survey and am particularly grateful to the 22 individuals who allowed me to interview them for this project.

Before beginning my research, I looked at the results of similar surveys conducted by ASTD in 2009 and a doctoral student named Joe Pulichino in 2007. ASTD only asked if respondents had conducted evaluations at the different levels to any extent; Pulichino’s survey questions allowed the respondents to decide on their own interpretation of terms such as “sometimes” and “rarely”.

I tried to be more precise and thus asked respondents to pick a percentage range for each level of evaluation. I know this must have been annoying for survey respondents, but “frequently” is a fuzzy word that could mean 50% or 75% or whatever else an individual interprets it to be. I divided my results into five intervals and grouped together the top three, my equivalents of sometimes (41-60%), frequently (61-80%), and almost always (81-100%). That gives the following frequencies with which each level of evaluation is conducted at least sometimes by the respondents:

    Level 1: 88.13%

    Level 2: 74.54%

    Level 3: 43.47%

    Level 4: 18.41%

Although the exact figures are lower than those ASTD and Pulichino generated, the trend is the same for all three surveys. Level 1 and Level 2 evaluations are commonly performed, Level 3 markedly less so, and Level 4 is a relative rarity.
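In code terms, the bucketing amounts to counting responses in the top three intervals. Here is a minimal Python sketch of that calculation, using made-up response data rather than the actual survey set:

```python
# Each respondent picked one of five frequency intervals per level:
# 1 = 0-20%, 2 = 21-40%, 3 = 41-60% ("sometimes"),
# 4 = 61-80% ("frequently"), 5 = 81-100% ("almost always").
# The data below is illustrative only, not the survey results.
responses = {
    "Level 1": [5, 5, 4, 3, 2, 1],
    "Level 2": [5, 4, 3, 2, 2, 1],
    "Level 3": [4, 3, 2, 2, 1, 1],
    "Level 4": [3, 2, 2, 1, 1, 1],
}

def pct_at_least_sometimes(intervals):
    """Share of respondents falling in the top three intervals (41-100%)."""
    return 100 * sum(1 for i in intervals if i >= 3) / len(intervals)

for level, data in responses.items():
    print(f"{level}: {pct_at_least_sometimes(data):.2f}%")
```

The grouping deliberately avoids interpreting fuzzy words: a response either falls in the 41-100% range or it doesn’t.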

I asked respondents how sufficient the results of each level were for evaluating the effectiveness of training and how important it was to conduct each of the levels. Now here’s where things start to get interesting.

    Level 1: 52.54% said it was somewhat or very sufficient, 66.10% said it was somewhat or very important

    Level 2: 61.02% said it was somewhat or very sufficient, 94.92% said it was somewhat or very important

    Level 3: 59.32% said it was somewhat or very sufficient, 98.31% said it was somewhat or very important

    Level 4: 42.37% said it was somewhat or very sufficient, 88.14% said it was somewhat or very important

I had expected that Level 3 would have been perceived as decidedly more useful than any other level as it would examine to what extent learners were applying their new knowledge/skills on the job. Almost everyone thought Level 3 was important, but why did so many of those same people not consider it sufficient for evaluating training?

In retrospect, I should have phrased the question more clearly. I intended to ask if a particular level was useful as a component for evaluating training, and it might have been interpreted as asking if a particular level is all you need to do so. I wish I had asked the latter question, now that I think of it; in my opinion, if you’re limited to one shot at evaluation it should be a comprehensive version of level 3.

Why would Level 4 be perceived as the least useful type of training evaluation? I’d love to explore that question more. I suspect that the difficulty in linking organizational outcomes to the training is a major issue. There’s also the perception that Level 4 is irrelevant for many organizations or categories of training interventions, and that is a whole other issue that I’ll discuss in a separate post.

I asked if respondents perceived any sort of positive correlation between the levels, so that a positive result for Level 1 and 2 evaluations strongly indicated that you’d also get a positive result when evaluating at Levels 3 and 4. I bet a lot of respondents thought this was a ridiculous question. However, one of the academic criticisms of Kirkpatrick’s levels is that it leads training professionals to assume this positive correlation. 69.12% of the respondents did not perceive any such correlation.

So let’s get back to conducting evaluations. What factors had the strongest impact on one’s success or lack of success in conducting Level 3 and 4 evaluations? First, let’s look at the reasons why people wanted to evaluate training at those levels.

The top three reasons training professionals wanted to evaluate at Level 3 were to assess the relevance of training (70.54%), to demonstrate their own value to the organization (47.73%), and to look at issues with transfer of training (40.91%). For Level 4, the top three reasons were demonstrating their own value to the organization (70.0%), assessing the relevance of training (50.0%), and looking at the organization’s actions which supported or hindered training efforts and results (50.0%).

What factors within the training department or organization had the most impact on whether or not such evaluations were done? For both Level 3 and Level 4, the top three factors were access to the learners for post-training data collection, the importance placed on conducting the evaluation, and the department’s expertise/experience in evaluating.

What prevented training departments from conducting Level 3 and 4 evaluations? For both levels, the biggest factors were a lack of resources (such as time and budget) and a lack of expertise, with a lack of post-training access to learners also having a notable effect. Level 3 evaluations were more likely to be hindered by a lack of support from organizational management than were Level 4 evaluations; this may be because those lacking organizational support for post-training evaluation would not even consider attempting a Level 4.

Respondents had the opportunity to include a free response explanation of what they perceived to be the most important factors in facilitating and obstructing their attempts to evaluate training. I coded this qualitative data and found that support from organizational management for conducting evaluations was the most important factor, both in helping and in hindering such evaluations. The second most important facilitating factor was access to data needed for evaluation, while the second most important obstructing factor was a lack of resources followed closely by a lack of importance placed on evaluation by the organization.

After the survey, I interviewed 22 of the respondents to collect qualitative data in hopes of putting the numbers in context. I used Gilbert’s Behavior Engineering Model (BEM) to classify interview responses into organizational-level and individual-level factors. As expected, many of the comments fell into the areas of organizational data (example: the organization’s perception of the importance of training evaluation) and organizational resources (example: sufficient time and personnel available to conduct evaluations of training).

The role of organizational perception may be the single most important factor in the success or failure in conducting Level 3 and 4 evaluations. The other types of factors are likely to be a result of this perception. If the organizational management does not support evaluation, it will not make the necessary resources available nor will it provide any incentives for conducting – or cooperating with – evaluation efforts. Such an organization will not recruit training professionals skilled in evaluation, and those already within the organization may leave (taking their skills with them) or become frustrated and demotivated or simply resigned.

I ended the data section of my thesis with a comparison between two interviewees and how organizational support affected their efforts. The two were fairly comparable in how I perceived their levels of knowledge, charisma, and ambition. One worked for an organization which placed a very strong value on evaluation and data collection; this person had a wealth of data available for analyzing the efficiency and quality of the training programs in place, plus support for trying new approaches. The other worked for an organization which saw no value in evaluating training and would not approve any active pursuit of the necessary data; this person tried to demonstrate the value to the organization, was rebuffed multiple times, and left the company shortly after our interview.

None of these findings about factors were surprising to anyone in the field, and they tied in tidily with Gilbert’s belief that organizational-level data and resources were by far the most important factors in any workplace performance issue. As the interviews went along, however, I noticed a third critical factor emerging: individual knowledge. The issue was not a lack of expertise or experience, which is what you would expect given the complexity of Level 4 evaluations. Instead, it was the interpretation of what Level 4 is.

Kirkpatrick first presented his Four Levels in a series of articles published between November 1959 and February 1960 in the Journal of the American Society of Training Directors (now T&D, a publication of ASTD). He defined Level 4 as “the measure of final results which occur as a result of training, including increased sales, higher productivity, bigger profits, reduced costs, less employee turnover, and improved quality.” He acknowledged the difficulty in evaluating outcomes which were not readily quantifiable, and of linking such outcomes to a training program; in such cases, he suggested limiting evaluation to the other three levels. It seems that a common interpretation of Level 4 has become not the final results of training, but the final numerical and financial results of training and the organization’s return on investment for the training. If the long-term training goals aren’t things like increased sales or bigger profits, or if you don’t need to justify the investment in training because the organization will allocate resources regardless of the ROI, is Level 4 still relevant?

Well, yes. Even if the training is required by policy or does not directly affect organizational income, it makes sense to verify that the training did what it was supposed to do for the organization. At Level 3, we look at what the training did for the learners – are they performing a set of skills or competencies to the degree defined as “success”? At Level 4, we look at the organizational goal which prompted the training – did the success of the learners in performing those skills or competencies achieve the organizational goal?

In 1998, McLinden and Trochim wrote an article for Performance Improvement introducing the concept of “return on expectations” and their framework for setting expectations for training and then evaluating how well those expectations were met. Recently, Kirkpatrick Partners have been promoting the ROE concept as the new and improved definition of Level 4. Several interviewees were familiar with the term and thought it was a good idea, but did not connect ROE to Level 4. So, there’s the connection. ROE means working with the stakeholders for your training, setting the expectations (intended outcomes which meet the stakeholders’ goals), and then measuring training results versus training expectations. That’s Level 4. (However, don’t use the acronym ROE around the MBAs in your organization: it translates to “return on equity” in MBA-speak, and they’ll wonder why you’re referring to profitability and investors.)

Return on investment for training is still important in many contexts, but Phillips presented it as a new fifth level rather than a definition for the fourth.

One of my conclusions is that this misinterpretation of Level 4 as strictly financial/numerical has a strong impact on organizational support for Level 4 evaluations. If you define Level 4 as strictly measurable output that affects income (sales, productivity, manufacturing defects, etc), you cannot effectively present a case for evaluating training that does not directly affect income. Such training should still accomplish its purpose, however. I must note that training professionals often do not have the opportunity to discuss such organizational goals with the stakeholders; many organizations see the training department as an “order taker” that functions only to fulfill training requests without first determining whether training is even the right option for the situation.

Thoughts on Methodology

If you were just interested in the results, you can stop reading now! For those curious about how I went about conducting the research, carry on.

This research project was approved by Boise State University’s Institutional Review Board, which must approve any research that involves humans as subjects. For both the survey and interviews, I was required to make available a letter of informed consent explaining participants’ rights. The IRB had to approve the survey questions, interview questions, letters of informed consent, and even the text of the LinkedIn posting used to solicit participants. My research was classified as Expedited Review as it collected new data from human subjects but did not subject them to anything hazardous to physical or mental health. My own mental health during this process was beside the point!

My methodology was based on Brinkerhoff’s Success Case Method. The survey phase of the SCM helps you identify the extreme cases, defined as individuals who were the most and least successful at benefiting from a program of some sort. This is followed by open-ended interviews with those extreme cases with the intention of identifying why the best performers did so well and why the worst could not succeed. Although I borrowed this structure from the SCM, it would have been difficult to define degrees of success as I did not examine the beneficiaries of a single program. It would also have been difficult to select representative (typical) cases.

About 48% of respondents had a master’s or doctoral degree in a field related to instructional design in some way, but this could have meant anything from a Ph.D in instructional systems technology to an M.Ed in elementary education. Meanwhile, nearly one-quarter of the respondents had no formal education in instructional design. With such a wide range of backgrounds, with only a professional function in common, I decided to do the only sensible thing and interview everyone who volunteered to do so. In a way this weakens the research because case selection is controlled by the subjects rather than the researcher, and is one reason why I would not consider my results truly transferable to other contexts. If I had a larger pool of participants, I might have tried to develop the extreme case selection, but I think that would have required a more focused survey.

I did not perform a statistical analysis on the survey data, although I had originally intended to do so, due to the low number of complete responses (68). I’ve certainly read plenty of academic research which included analysis on much smaller data sets, but I didn’t feel comfortable doing so for the thesis. I plan to teach myself R as my student license for SPSS has expired, so perhaps I’ll run my survey data as practice. What I’d like to see is the correlation between formal education in evaluation methods and the success in conducting Levels 3 and 4.

I asked each survey respondent to self-select as a success case (had conducted at least one evaluation and presented the results to the stakeholders), a non-success (had attempted at least one evaluation but could not complete it for whatever reason), or a non-evaluator (had never attempted an evaluation) for Level 3 and for Level 4. The interview subjects were assigned numbers based on interview order and codes based on their self-selection. Interviewee 6SC3NE4 would have been the sixth person interviewed and had classified himself or herself as a success case for Level 3 evaluations and a non-evaluator for Level 4.
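That coding scheme can be expressed as a tiny helper function. This is my own sketch, not anything from the thesis; in particular, the “NS” abbreviation for non-success cases is an assumption, since the example above only shows the “SC” and “NE” codes:

```python
# Hypothetical helper reproducing interviewee codes like "6SC3NE4":
# interview order, then the self-selected status for Level 3 and Level 4.
# "SC" = success case, "NE" = non-evaluator, "NS" = non-success
# ("NS" is assumed; the original example shows only "SC" and "NE").
VALID_STATUSES = {"SC", "NS", "NE"}

def interviewee_code(order, level3_status, level4_status):
    if level3_status not in VALID_STATUSES or level4_status not in VALID_STATUSES:
        raise ValueError("status must be one of SC, NS, NE")
    # The digits 3 and 4 mark which Kirkpatrick level each status refers to.
    return f"{order}{level3_status}3{level4_status}4"

print(interviewee_code(6, "SC", "NE"))  # the sixth interviewee from the example
```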

ASTD and Pulichino were able to gather much more quantitative data through their surveys than I could. However, neither collected qualitative data. What do the numbers mean in context? One person’s idea of successful evaluation is another’s definition of inadequate and inconclusive. One month after each of his training programs, Bob sends out a 5-question survey to the learners and evaluates his programs based solely on the answers of the few that respond. Beth evaluates only a small selection of mission-critical programs, but does so by triangulating data from surveys, interviews, and observations of the learners, their supervisors, and their clients. If you ask each of them how frequently they evaluate training at Level 3, Bob would answer “almost always” and Beth “seldom”. Which one is the successful evaluator?

I interviewed 22 individuals for this research project, all of whom were involved in organizational training in some capacity. What I had hoped to find was a “magic bullet”, some critical factor that determined the fate of one’s evaluation attempts. There is one – get hired by the right company with the right management. Simple, right? Anyway, the interviews were scheduled for about 15-20 minutes each but I didn’t discourage individuals from continuing to talk if they had more to say. The shortest interview was 15 minutes, and the longest was somewhere over an hour. For each interview I had a semi-structured script so I could cover the relevant topics, but it was often valuable to let the conversation wander a bit.

My research questions were as follows:

Research Question 1:
With what frequency do training professionals conduct Level 3 and/or Level 4 evaluations for their organizations?

    Sub-question 1a: Who are the stakeholders for these evaluations?
    Sub-question 1b: For what reasons do training professionals conduct or attempt to conduct the evaluations?

Research Question 2:
What factors act as facilitators or barriers to conducting Level 3 and Level 4 training evaluations?

    Sub-question 2a: For Success Cases, what are the facilitating factors and the barriers, and how did they impact the performance of evaluations?
    Sub-question 2b: For Non-Success Cases, what are the facilitating factors and the barriers, and how did they impact the attempts to perform evaluations?
    Sub-question 2c: For Non-Evaluators, what are the facilitating factors and the barriers, and why were evaluations not attempted?

In retrospect, I asked the wrong questions. To be fair, I didn’t realize this until the interviews started and I noticed the varying interpretations of Level 4. What I should have asked is “how do training professionals interpret the purpose of Level 3 and Level 4 evaluations, and how do they define successful evaluations?”


Research: Motivation, Employee Engagement, and Gilbert

April 29th, 2012

Introduction: In Fall 2011 I took a Reading & Conference with Dr. Donald Stepich, the theme of which was “Evidence-Based Practice”. Meanwhile, part of my graduate assistant duties for Dr. Donald Winiecki that semester was to help him explore the roots of Human Performance Technology, which were poorly documented. One of the key models in use by HPT practitioners is Gilbert’s Behavior Engineering Model; despite Gilbert’s insistence that he worked scientifically, he did not cite research to support his claims. The BEM “makes sense” but as yet no one appears to have linked extant research or conducted new research which would validate it.

This paper was my first attempt to compare what Gilbert asserted to what current research indicates. Gilbert’s attitude towards psychological research was somewhat hostile, despite (or because of?) the fact that he had earned a Ph.D in psychology. He wrote with disdain about the “cult” of behavior which focused so heavily on motivation, which he felt was a minor factor in workforce performance issues. Meanwhile, modern researchers in organizational behavior and human resource development continue to focus on worker motivation. I decided to look at the hot topic of “employee engagement”. Do organizational efforts to influence workers’ intrinsic motivation have any measurable effect on their work performance?

(short answer: Organizations can influence how workers perceive them, which may lead to better on-the-job performance. Gilbert and current research agree that organizations cannot directly change that perception. Gilbert dismissed such efforts as a waste of resources, while research indicates that the indirect payoff may be positive. The jury is still out on this topic.)

Is Individual Motivation Relevant to Performance?

Introduction

In discussing the components of his Behavior Engineering Model, Gilbert (2007) declared that the individual instrumentation (capacity) and individual motivation (motive) components were of lesser importance than other components when analyzing performance issues:

The two causes of poor performance most commonly espoused are motives (“they don’t care”) and capacity (“they’re too dumb”). But these are usually the last two places one should look for causes of incompetence, simply because they rarely are the substantial problem. (p. 89)

Gilbert (2007) noted that people normally care about their job performance without any prompting from external influences and that improving environmental motivation (incentives) “can usually obliterate all evidence of defective motives” (p. 89). Gilbert did not cite any research studies, organizational psychology theories, management science theories, or other evidence to back up his assertions.

Are intrinsic motives truly unimportant to job performance? Kovach (1995) surveyed employees on what job rewards they desired the most, and at the same time surveyed supervisors on what they perceived to be the job rewards that their employees desired the most. The top three choices by employees were interesting work, full appreciation of work done, and the feeling of being “in” on things; all three rewards are the employee’s perceptions of how well he or she is individually valued by the organization. The supervisors, however, believed that their employees felt most rewarded by good wages, job security, and potential for growth in the organization; all three factors are organizational behaviors that are generally not targeted at any specific individual. Around this time, the human resources field had taken hold of a concept called employee engagement, defined as the level of enthusiasm and interest that an employee feels about his or her job and employer. The idea was that if an employee felt valued by the organization, he or she would reciprocate that value by developing an emotional attachment to the job that would in turn make the employee desire to give back to the organization through a high level of job performance.

Was Gilbert correct in believing that there is no significant link between employee motives and performance, or is there research evidence that indicates that such a link exists and that efforts towards employee engagement have a positive and measurable effect on workplace performance?

Gilbert and Motivation

Thomas Gilbert (2007) took a dim view of motivation, calling it a sub-cult that was “perhaps the most pernicious of the enemy’s agents” (p. 9), and claiming that there was no evidence that any behavioral professional who focused on motivation had ever produced measurable results. He disdained motivation as a waste of the energies of the performance engineer, only worth discussing in order to dismiss it:

There is more nonsense, superstition, and plain self-deception about the subject of motivation than about any other topic. And because motivation is a favorite nostrum offered as a curative for incompetence, the nature and causes of this nonsense require some examination.  (p. 308)

He stated that in an actual performance analysis, attempts to improve motives would have little success due to lack of leverage to change that component. In his opinion, monetary rewards were the only truly effective motivator, yet performance engineers “shrink from any suggestion of changing the way they pay people for their performance” (p. 309) lest they offend their clients. He conceded that workers might respond positively to non-monetary incentives such as “patting them on the back” (p. 309), but asserted that only money mattered.

Instead, he believed that by examining and improving a job’s incentives, an employer would create a self-selecting process where only those who find those incentives acceptable will take that job. He dismissed inspirational leadership as just a temporary measure, and regarded any focus on individual motives as lazy management:

The greatest effect we can have on people’s motives comes through indirect means, by improving the environment: improving incentives, information, work tools, and the assignment of greater responsibility or the redesign of the job. The direct manipulation of people’s motives seldom has important leverage. In fact, people who are directly trying to improve the attitudes of employees and students are usually looking in the direction of the least leverage. (p. 169)

In his view, attempting to directly influence an employee’s intrinsic motivation was a waste of effort. By improving the environmental components of a job, however, an employer would indirectly influence the employee’s motives.

Employee Engagement

The definition of engaged employees was first put forth by Kahn (1990) as those who “employ and express themselves physically, cognitively, and emotionally during role performances” (p. 694). Employees are engaged if they feel a high level of involvement in their jobs, express enthusiasm when performing their jobs, and maintain interest in what their jobs entail. Saks (2006) divided the concept of engagement into job engagement and organization engagement; an individual may feel a strong sense of commitment to his or her specific job without feeling an equivalent commitment to the organization, or the reverse may be the case.

Macey and Schneider (2008) found that there was as yet no universal definition of employee engagement. They compiled their own definition by synthesizing the existing literature, calling it a psychological state that “connotes involvement, commitment, passion, enthusiasm, focused effort, and energy” (p. 2).

Rich, Lepine, and Crawford (2010) hypothesized that three factors operate together to influence employee engagement. The first is value congruence, the ability to find personal meaning in one’s work, whether on an individual level (such as feeling fulfillment through tackling intellectual challenges) or on a societal level (such as working at a non-profit organization dedicated to preserving farmland or educating children about nutrition). The second is core self-evaluation, one’s sense of confidence in one’s own skills and knowledge. The third is perceived organizational support, one’s sense that he or she can devote time and energy to the job and receive a reciprocal level of effort back from the organization.

The Effect of Engagement on Performance

Harter, Schmidt, and Hayes (2002) developed a hypothesis that the employee satisfaction and engagement levels within a business unit have a positive impact on the outcomes of that business unit, including productivity, profit, and customer satisfaction. To examine this, they surveyed employees in a broad range of organizations across multiple industries, and then triangulated the survey data with extant data from the employees’ supervisors and business units. Their results showed that there was indeed a positive correlation between business outcomes and both employee satisfaction and employee engagement. The level of employee engagement correlated most strongly with business outcomes for customer satisfaction, unit profitability, unit productivity, and employee safety. The authors suggested that organizations seeking to improve overall performance should explore the overall level of employee engagement in its best performing units; the organization can identify the organizational elements that influenced those levels and then use that information to take steps to increase engagement in other business units.

Parker, Baltes, Young, Huff, Altmann, LaCost, and Roberts (2003) concluded that there is a reliable relationship between psychological climate and individual outcomes such as motivation and job satisfaction. The relationship is stronger between psychological climate and work attitudes related to organizational behavior, such as job satisfaction, than with factors more internal to the employee, such as motivation. They further concluded that work attitudes influenced the relationship between psychological climate and motivation, and thus organizational behavior would indirectly influence employees’ motivations by directly influencing work attitudes. The authors believed that their findings were generalizable, and that they supported the idea that psychological climate had a strong impact on employee attitude that in turn may impact employee performance and thus organizational performance. The authors suggested that an organization concerned with improving employee well-being and retention levels should examine the psychological climate created by its behaviors.

Rich et al. (2010) hypothesized a direct relationship between job engagement and job performance. They tested this hypothesis by surveying a group of workers and validated that self-reported data with supervisory reports on performance, and found that this sample did show a positive correlation between engagement and performance.

The Effect of the Organization on Engagement Levels

The analysis of Parker et al. (2003) found several studies which found that employee perception of the workplace psychological environment was influenced by organizational characteristics, and this perception in turn influenced performance outcomes, exhibited behaviors, and likelihood of staying in or leaving the job or organization. For example, if a supervisor implements new attendance guidelines for all employees in his department, the employees may perceive this as a lack of trust demonstrated by the organization (represented by the supervisor), and will in turn lose some level of trust in the organization.

Macey and Schneider (2008) concurred, stating that the origins of the behaviors and attitudes which exhibit the level of employee engagement “are located in conditions under which people work, and the consequences are thought to be of value to organizational effectiveness” (p. 1).

Bakker and Demerouti (2007) developed the Job Demands-Resources model, which predicts employee well-being by examining the relative impact of job demands, activities that increase stress on the employee, and job resources, which decrease that stress. Job resources are tangible and intangible factors which help employees achieve their work goals, promote employee learning and growth, or function to reduce job demands. Job demands are tangible and intangible factors which require sustained physical, cognitive, or emotional effort from the employee. The premise is that if job demands outweigh job resources, the employee will feel strain at the workplace, which may in turn lead to less enthusiasm, less energy in performing duties, and less commitment to the job. If job resources outweigh job demands, the employee will feel more physically and mentally comfortable in the workplace, which may in turn lead to more enthusiasm and increased commitment to the job.

The study conducted by Rich et al. (2010) also hypothesized a link between organizational support and engagement, and discovered a strong correlation between these factors. They had also hypothesized links between engagement and both value congruence and core self-evaluation; they found a positive correlation here as well, but neither was as strongly linked to engagement as organizational support.

Saks (2006) found research data showing a positive relationship between engagement and organizational factors such as available resources and good social relationships with others in the organization. He stated that social exchange theory, in which obligation is sustained through mutual interdependence, accounted for the relevance of these factors. Individuals who perceive positive commitments from the organization, through resources and support, are motivated to reciprocate by giving a positive commitment back through dedication to work. The more individuals perceive that they benefit from the organization, the more they are willing to give back to the organization. If the organization shows a lack of commitment by not providing positive resources, individuals respond by reducing or withdrawing their own commitment.

Saks identified the antecedents of engagement as job characteristics and organizational behaviors, and the consequences of engagement as job satisfaction and commitment to the organization. When he conducted his own study, his data showed a positive relationship between those antecedents and engagement levels, and between engagement levels and the consequences. When he compared the direct relationship between antecedents and consequences against the relationship with engagement included, he determined that the level of engagement did mediate the effect of antecedents on consequences. This suggests that employees’ levels of engagement shape how organizational factors (antecedents) translate into employee outcomes (consequences).
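The mediation logic can be made concrete with a small regression sketch on synthetic data (the variable names, effect sizes, and data-generating process below are my own illustrative assumptions, not Saks’s actual data or analysis): when engagement carries the antecedent’s effect, the antecedent’s direct coefficient shrinks toward zero once engagement enters the model.

```python
# Sketch of a mediation check on synthetic data: engagement mediates
# antecedents -> consequences if the antecedent's coefficient shrinks
# once engagement is added as a predictor. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 500
antecedent = rng.normal(size=n)                      # e.g., perceived organizational support
engagement = 0.8 * antecedent + rng.normal(size=n)   # antecedent -> engagement (a path)
consequence = 0.7 * engagement + rng.normal(size=n)  # engagement -> consequence (b path)

def coef(y, *xs):
    """OLS coefficient on the first predictor, fitting an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

total = coef(consequence, antecedent)                # c path: antecedent alone
direct = coef(consequence, antecedent, engagement)   # c' path: engagement controlled for
print(f"total effect: {total:.2f}, direct effect: {direct:.2f}")
# The direct effect is near zero while the total effect is not --
# the pattern consistent with engagement mediating the relationship.
```

Saks’s actual analysis used validated survey scales and multiple regression; the sketch only demonstrates the shrinking-coefficient pattern that indicates mediation.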

Conclusion

According to Gilbert (2007), attempts to directly influence an employee’s intrinsic motives are wasted effort, because motives cannot be directly influenced and are of little consequence to employee performance anyway.

The concept of employee engagement is based on the belief that an organization can exhibit behaviors that positively influence an individual’s intrinsic motives by altering his or her perceptions about the organization.

There is still no extensive research into the effects of employee engagement on measurable job performance. The research so far suggests that Gilbert was incorrect; organizations can positively influence employees’ intrinsic motives through efforts directed at increasing employee engagement, and this may lead to improved performance on the job.

However, both Gilbert and the extant studies agree that an organization cannot directly change an individual’s motives. It can only influence the individual’s perception of the job and organization through its behaviors, whether that be through tangible environmental changes or intangible pats on the back.


References

Bakker, A. B. & Demerouti, E. (2007). The Job Demands-Resources model: state of the art. Journal of Managerial Psychology, 22(3), 309-328.

Gilbert, T. F. (2007). Human competence: Engineering worthy performance (Tribute ed.). San Francisco, CA: Pfeiffer.

Harter, J. K., Schmidt, F. L., & Hayes, T. L. (2002). Business-unit-level relationship between employee satisfaction, employee engagement, and business outcomes: A meta-analysis. Journal of Applied Psychology, 87(2), 268-279.

Kahn, W. A. (1990). Psychological conditions of personal engagement and disengagement at work. Academy of Management Journal, 33(4), 692-724.

Kovach, K. A. (1995). Employee motivation: Addressing a crucial factor in your organization’s performance. Employment Relations Today, 22(2), 93-107.

Macey, W. H., & Schneider, B. (2008). The meaning of employee engagement. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1(1), 3-30.

Parker, C. P., Baltes, B. B., Young, S. A., Huff, J. W., Altmann, R. A., LaCost, H. A., & Roberts, J. E. (2003). Relationships between psychological climate perceptions and work outcomes: a meta-analytic review. Journal of Organizational Behavior, 24(4), 389-416.

Rich, B., Lepine, J. A., & Crawford, E. R. (2010). Job engagement: Antecedents and effects on job performance. Academy of Management Journal, 53(3), 617-635.

Saks, A. M. (2006). Antecedents and consequences of employee engagement. Journal of Managerial Psychology, 21(7), 600-619.

Research: Evidence-Based Practice

April 29th, 2012

Introduction: As part of my graduate assistant duties for Dr. Yonnie Chyung, I researched the topic of evidence-based practice (EBP). I looked at its origins in medical practice, its adoption by other fields, and how it might apply to the field of Human Performance Technology.  I was particularly interested in the barriers to implementation and how they might affect HPT professionals who wanted to introduce EBP into their practice. The following is my brief literature review on the subject.

Evidence-Based Practice


1. What is evidence-based practice?

Definition

Evidence-based practice (EBP) originated in the field of medicine. Sackett et al. (1996) defined it as the pairing of a practitioner’s clinical expertise with empirical evidence from clinical research in order to make the best decisions about a patient’s care. Management science has adapted EBP into evidence-based management (EBM), which incorporates research findings from the organizational and behavioral sciences into organizational decision-making processes (Rousseau, 2006).

Evidence

Although empirical evidence from randomized controlled trials (RCTs) is considered the standard due to its quantifiable and objective nature, Rycroft-Malone, Gill, Seers, and Kitson (2004) maintained that contextual evidence collected through other methods was both credible and necessary to form a complete picture. The American Psychological Association (APA) (2005) defined three categories of credible evidence: research studies, practitioner experience, and situational context. Research is not limited to RCTs and other experimental studies; ethnographic research, case studies, and studies linking interventions to outcomes are also valid sources of data which retain contextual elements (APA, 2005; Edwards, Dattilio, & Bromley, 2004).

Principles

McKibbon and Wilczynski (2009) laid out five steps for conducting evidence-based practice: defining a question, collecting the evidence, evaluating the evidence, integrating the evidence into context to create a resolution, and evaluating the resolution. Sackett et al. (1996) stressed the importance of integrating the evidence, as EBP is an adaptation of evidence to suit both the practitioner’s level of expertise and the uniqueness of each EBP research question.


2. Evidence-Based Practice in Use

Fields

The concept of evidence-based practice arose in clinical medicine and spread to closely related fields such as nursing, dentistry, mental health, and public health (Trinder & Reynolds, 2001). Studies on the use of EBP have been conducted in fields such as physiotherapy (Turner & Whitfield, 1997), dental hygiene (Asadoorian, Hearson, Satyanarayana, & Ursel, 2010), occupational therapy (Bennett et al., 2003), education (Pirrie, 2001), school libraries (Barron, 2003), public safety (Lum, Koper, & Telep, 2011), and social work (Webb, 2001).

Processes and Guidelines

EBP is still an emerging concept, with even the early-adopting fields of clinical medicine and nursing questioning its reliance on clinical data (Clarke, 1999; Seidl, 2011) and arguing its limitations (Nevo & Slonim-Nevo, 2011).

Relative Value of Evidence

Rice (2008) ranked sources of evidence by the consistency and universality of results, judging systematic reviews of RCTs to be the most valuable for use in EBP, followed by individual RCT studies and structured case studies. APA (2005) did not rank its evidence sources, but did recognize the limitations of each category in terms of the generalizability of results and the transportability of results into usable data for practitioners.


3. Limiting Factors of EBP Usage

Challenges

Each field presents its own unique challenges to implementing EBP. Business-oriented disciplines such as management science (Pfeffer & Sutton, 2006), human resource development (Gray, Iles, & Watson, 2011; Rynes, Giluk, & Brown, 2007), and industrial/organizational psychology (Briner & Rousseau, 2011; Thayer, Wildman, & Salas, 2011) all share the common challenge of a gap between the interests of academia and practitioners. In each case, the subjects of academia-based research may vary significantly from the topics written about in practitioner-oriented publications. Articles written for publication in peer-reviewed academic journals are not written with the practitioner in mind.

Other challenges to EBP typical in business-related fields include the quality of available evidence (Rice, 2008; Thomas, 2006), the applicability of results from one context to another (Clark, 2006), and the limited number of systematically conducted studies from which to draw relevant evidence (Pfeffer & Sutton, 2006).

Barriers

Barriers to implementing evidence-based practice can be categorized as environmental and individual.

Environmental barriers include authoritative organizational hierarchies which limit practitioners’ ability to base their decisions on research rather than traditional practice (Asadoorian, Hearson, Satyanarayana, & Ursel, 2010; Gambrill, 1999) and the lack of incentives to take the extra effort needed to conduct EBP (Thayer, Wildman, & Salas, 2011). External factors such as legal requirements for practice may also complicate using EBP (Hasson, Andersson, & Bejerholm, 2011).

The greatest environmental barrier to implementing EBP may be a lack of time and resources available for practitioners (Hannes, Pieters, Goedhuys, & Aertgeerts, 2010; McCluskey, 2003; Metcalfe et al., 2001) to search the available research and formulate interpretations.

Individual barriers vary from person to person, but studies indicate that a common problem is the lack of the skills required to conduct adequate research and synthesize the findings. Such skills are usually found in individuals with higher levels of education, and those individuals are more likely to accept EBP because they are comfortable with the cognitive skills required (Aarons, 2004; McCluskey, 2003). However, when that higher education consists of a practitioner-oriented curriculum with less emphasis on research, holding a graduate degree does not necessarily correlate with familiarity in searching and comprehending academic research (McCluskey, 2003; Rynes, Giluk, & Brown, 2007).

4. Evidence-Based Practice and Human Performance Technology

Needs Assessment

The analytical nature of needs assessment in HPT practice calls for theoretical frameworks tested in practice in order to address the complexity of diagnosing organizational problems (Cho, Jo, Park, Kang, & Chen, 2011). However, the models in use by HPT practitioners were developed with little documentation from their creators about the theories or evidence upon which the models were structured; the creators were practicing consultants rather than pure academics, and drew primarily from their own client experiences (Rummler, 2007).

For now, HPT practitioners can draw upon extant research in management science, human resource development, organizational behavior, and industrial/organizational psychology in order to develop intervention strategies based on researched evidence.

Instructional Design

There is currently a broad array of research evidence available to instructional designers, as evidence-based practice has already been introduced into the education field (Pirrie, 2001) and additional research has been done on workplace-directed topics such as transfer of training (Hutchins, 2009) and learning environments (Hardré, 2008).

Change Management and Motivational Issues

The concept of change management, which provides structure for moving an organization from one state to another, draws on research and concepts from industrial/organizational psychology as well as management science and human resource development, particularly in the realm of acceptance and motivation (Dormant, 1999). Motivation in organizations has been among the most published topics in human resource development publications over the last three decades, so the evidence base is substantial (Deadrick & Gibson, 2009).


5. Conclusion

Among the principles of the International Society for Performance Improvement (ISPI)’s Code of Ethics is the statement that human performance technology (HPT) practitioners use validated practice concepts consistent with existing research and practice knowledge (ISPI, 2002). Despite the very real challenges to be faced in adopting evidence-based practice in human performance technology, it is an essential step in establishing the credibility of the discipline (Clark, 2006). Academics and practitioners must work together to develop both lines of research and adaptations of research findings which are accessible to practitioners. Systematically-documented case studies drawn from practitioner experience would both provide evidence, adding to the discipline’s knowledge base, and point towards new lines of research for academics to explore.


References

Aarons, G. A. (2004). Mental health provider attitudes toward adoption of evidence-based practice: The Evidence-Based Practice Attitude Scale (EBPAS). Mental Health Services Research, 6(2), 61-74.

American Psychological Association Presidential Task Force on Evidence-Based Practice. (2005). Draft policy statement on evidence-based practice in psychology. Washington, DC: American Psychological Association.

Asadoorian, J., Hearson, B., Satyanarayana, S., & Ursel, J. (2010). Evidence based practice in dental hygiene: Exploring the enhancers and barriers across disciplines. Canadian Journal of Dental Hygiene, 44(6), 271-276.

Barron, D.D. (2003). Evidence based practice and the school library media specialist. School Library Monthly, 20(4), 49-51.

Bennett, S., Tooth, L., McKenna, K., Rodger, S., Strong, J., Ziviani, J., Mickan, S., & Gibson, L. (2003). Perceptions of evidence-based practice: A survey of Australian occupational therapists. Australian Occupational Therapy Journal, 50(1), 13-22.

Briner, R. B., & Rousseau, D. M. (2011). Evidence-based I-O psychology: Not there yet. Industrial and Organizational Psychology: Perspectives on Science and Practice, 4(1), 3-22.

Cho, Y., Jo, S.J., Park, S., Kang, I., & Chen, Z. (2011). The current state of human performance technology: A citation network analysis of Performance Improvement Quarterly, 1988-2010. Performance Improvement Quarterly, 24(1), 69-95.

Clark, R.C. (2006). Evidence-based practice and the professionalization of human performance technology. In Pershing, J.A. (Ed.), Handbook of human performance technology: Principles, practices, and potential (3rd ed., pp. 873-898). San Francisco, CA: Pfeiffer.

Clarke, J. B. (1999). Evidence-based practice: A retrograde step? The importance of pluralism in evidence generation for the practice of health care. Journal of Clinical Nursing, 8(1), 89.

Deadrick, D.L. & Gibson, P.A. (2009). Revisiting the research-practice gap in HR: A longitudinal analysis. Human Resource Management Review, 19(2), 144-153.

Dormant, D. (1999). Implementing human performance technology in organizations. In H. Stolovitch & E. Keeps (Eds.), Handbook of human performance technology (1st ed., pp. 237-259). San Francisco, CA: Jossey-Bass/Pfeiffer.

Edwards, D., Dattilio, F., & Bromley, D. (2004). Developing evidence-based practice: The role of case-based research. Professional Psychology: Research and Practice, 35(6), 589-597.

Gambrill, E. E. (1999). Evidence-based practice: an alternative to authority-based practice. Families in Society: The Journal of Contemporary Social Services, 80(4), 341-350.

Gray, D. E., Iles, P., & Watson, S. (2011). Spanning the HRD academic-practitioner divide: Bridging the gap through mode 2 research. Journal of European Industrial Training, 35(3), 247-263.

Hannes, K., Pieters, G., Goedhuys, J., & Aertgeerts, B. (2010). Exploring barriers to the implementation of evidence-based practice in psychiatry to inform health policy: A focus group based study. Community Mental Health Journal, 46(5), 423-432.

Hardré, P. (2008). Designing effective learning environments for continuing education. Performance Improvement Quarterly, 14(3), 43-74.

Hasson, H., Andersson, M., & Bejerholm, U. (2011). Barriers in implementation of evidence-based practice. Journal of Health Organization and Management, 25(3), 332-345.

Hutchins, H.M. (2009). In the trainer’s voice: A study of training transfer practices. Performance Improvement Quarterly, 22(1), 69-93.

International Society for Performance Improvement. (2002). ISPI code of ethics. Retrieved from http://www.ispi.org/uploadedFiles/ISPI_Site/About_ISPI/About/Code-of-Ethics.pdf

Lum, C., Koper, C. S., & Telep, C. W. (2011). The evidence-based policing matrix. Journal of Experimental Criminology, 7(1), 4-26.

McCluskey, A. (2003). Occupational therapists report a low level of knowledge, skill and involvement in evidence-based practice. Australian Occupational Therapy Journal, 50(1), 3-12.

McKibbon, A. & Wilczynski, N. (2009). PDQ evidence-based principles and practice. Shelton, CT: People’s Medical Publishing House.

Metcalfe, C., Lewin, R., Wisher, S., Perry, S., Bannigan, K., & Klaber Moffett, J. (2001). Barriers to implementing the evidence base in four NHS therapies: Dietitians, occupational therapists, physiotherapists, speech and language therapists. Physiotherapy, 87(8), 433-441.

Nevo, I. & Slonim-Nevo, V. (2011). The myth of evidence-based practice: Towards evidence-informed practice. British Journal of Social Work, 41(6), 1176-1197.

Pfeffer, J., & Sutton, R. I. (2006). Evidence-based management. Harvard Business Review, 84(1), 62-74.

Pirrie, A. (2001). Evidence-based practice in education: The best medicine? British Journal of Educational Studies, 49(2), 124-136.

Rice, M.J. (2008). Evidence-based practice in psychiatric care: Defining levels of evidence. Journal of the American Psychiatric Nurses Association, 14(3), 181-187.

Rousseau, D. M. (2006). Is there such a thing as ‘evidence-based management’?. Academy of Management Review, 31(2), 256-269.

Rummler, G.A. (2007). The past is prologue: An eyewitness account of HPT. Performance Improvement, 46(10), 5-9.

Rycroft-Malone, J., Gill, H., Seers, K., & Kitson, A. (2004). An exploration of the factors that influence the implementation of evidence into practice. Journal of Clinical Nursing, 13(8), 913-924.

Rynes, S.L., Giluk, T.L., & Brown, K.G. (2007). The very separate worlds of academic and practitioner periodicals in human resource management: Implications for evidence-based management. Academy of Management Journal, 50(5), 987-1008.

Sackett, D. L., Rosenberg, W. M. C., Muir Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn’t. British Medical Journal, 312(7023), 71-72.

Seidl, K. L. (2011). To EBP or not to EBP … why is it a question? Bariatric Nursing and Surgical Patient Care, 6(2), 53-54.

Thayer, A. L., Wildman, J. L., & Salas, E. (2011). I-O psychology: We have the evidence; we just don’t use it (or care to). Industrial and Organizational Psychology, 4(1), 32-35.

Thomas, M. N. (2006). Evidence-practice partnership. Performance Improvement, 45(6), 8-12.

Trinder, L. & Reynolds, S. (2001). Evidence-based practice: A critical appraisal (2nd ed.). Malden, MA: Blackwell Science.

Turner, P. & Whitfield, T.W.A. (1997). Physiotherapists’ use of evidence based practice: A cross-national study. Physiotherapy Research International, 2(1), 17-29.

Webb, S.A. (2001). Some considerations on the validity of evidence-based practice in social work. British Journal of Social Work, 31(1), 57-79.


WORC: Leader-Member Exchange and Attributional Comparisons

April 29th, 2012

* ARTICLE *
Campbell, C. R., & Swift, C. (2006). Attributional comparisons across biases and leader-member exchange status. Journal of Managerial Issues, 18(3), 393-408.

* SYNOPSIS *
How does the relationship between worker and manager affect the worker’s perception of the reasons for his or her level of workplace performance?

Leader-member exchange (LMX) is the theory that supervisors develop different levels of relationships with their subordinates. In-group members are those with a closer relationship to the leader, and are granted higher levels of responsibility, trust, and access by the leader. Out-group members are those who lack this close relationship, and who function at lower levels of responsibility, trust, and access within the workgroup. One possible explanation for this difference in relationship strength is attraction-selection-attrition (ASA) theory, which assumes that individuals with similar psychological attributes think in closer alignment with each other; those who are not in psychological alignment with the leader may pull back or leave altogether due to a lesser feeling of “fitting in.” In other words, in-group members may have that status because they have greater psychological similarity to their supervisor than do other subordinates.

Attribution theory states that individuals attempt to find causal relationships which explain their own experiences and the experiences or behavior of others. These perceived causal linkages may be formed through a bias, or preconception. Individuals viewing their experiences through a self-serving bias will perceive a link between positive experiences and their own internal qualities, but attribute the cause of negative experiences to external factors. An individual operating under an actor-observer bias links his or her own actions to external factors, but perceives the actions of others as caused by their internal qualities.

Under either bias, a worker who performs poorly will attribute this lack of personal success to external factors. What happens when a worker performs well? Under the self-serving bias, the worker would attribute this success to his abilities and actions. However, under an actor-observer bias, the worker would attribute this success to external factors such as leadership support and organizational resources. The authors sought to resolve this contradiction by looking for relationships between a subordinate’s LMX status, how the subordinate perceives the causes of success and failure, and how his or her supervisor perceives the subordinate’s causes for success and failure.

* RESEARCH QUESTIONS *
Based on their review of the existing research, the authors stated eight hypotheses:

H1: When performance is positive, supervisors will make greater internal than external attributions for in-group subordinate performance.
H2: When performance is negative, supervisors will make greater external than internal attributions for in-group subordinate performance.
H3: In-group subordinates will make greater internal than external attributions for their own positive performance.
H4: In-group subordinates will make greater external than internal attributions for their own negative performance.
H5: When performance is positive, supervisors will make greater internal than external attributions for out-group subordinate performance.
H6: When performance is negative, supervisors will make greater internal than external attributions for out-group subordinate performance.
H7: Out-group subordinates will make greater external than internal attributions for their own positive performance.
H8: Out-group subordinates will make greater external than internal attributions for their own negative performance.

The authors believed that supervisors and in-group subordinates would attribute in-group subordinate performance to the same causes, while supervisors and out-group subordinates would attribute out-group performance to opposite causes.

* METHODS *
The authors mailed printed surveys to employees of a U.S. regional division of a large retail organization. Responses were received from 229 subordinate employees and 51 of their supervisors; because each supervisor rated up to three subordinates, the supervisors completed 135 individual subordinate ratings.

Subordinates rated their LMX status using a validated scale (LMX-7), while supervisors rated their subordinates’ LMX status using a parallel scale (SMLX-7). Subordinates were then given two retail scenarios, one with a positive outcome and the other with a negative outcome, and were asked to imagine and then evaluate how they would perform in these situations. Subordinates then rated how their performance in each scenario was affected by ability, effort, ease of the task, and luck. Supervisors were asked to imagine and then evaluate how each of their subordinates would perform in these situations, and then rate how each subordinate’s performance was affected by ability, effort, ease of the task, and luck.

The authors found no significant difference between how subordinates rated their LMX status and how their supervisors rated their LMX status. Subordinates were categorized as in-group, out-group, and a middle-group; data related to the middle-group subordinates was eliminated from further analysis.

* FINDINGS *
The statistical analysis of the data supported the first three hypotheses: both supervisors (H1) and in-group subordinates (H3) attributed positive in-group performance to internal factors (ability and effort), and supervisors perceived external factors (ease of task and luck) as the cause of negative performance by in-group subordinates (H2). However, the fourth hypothesis was not supported by the data. The in-group subordinates did not attribute their negative performance to external factors, as was hypothesized; their responses showed no significant difference between internal and external factors as the cause of negative performance.

For out-group subordinates, supervisors did not perceive any significant difference between internal and external factors as the cause for positive performance. The authors noted that this implied that supervisors do not consider the abilities and efforts of out-group members to be any greater a factor for success than external factors, which indicates that the supervisors place less value on the abilities and effort of those out-group members. However, the authors’ sixth hypothesis, in which supervisors blamed the negative performance of out-group members on internal factors, was supported by the data. The seventh hypothesis was not supported by the data while the eighth hypothesis was. The authors had assumed that out-group members would see their performance through the actor-observer bias (opposite of the self-serving bias expected of the supervisors) and thus attribute their performance, both positive and negative, to external factors. However, the data indicated that out-group members, like in-group members, attributed positive performance to internal factors.

These results support prior research which indicated that supervisors give in-group subordinates credit for their abilities and effort when performance is positive, and place the blame elsewhere when in-group subordinates perform poorly. This study also indicates that supervisors do blame out-group subordinates for poor performance, and do not give them credit for their abilities and efforts even when they perform well.

* DISCUSSION FOR IPT-N MEMBERS *
What implications might the leader-member exchange (LMX) theory and the results of this study have for the data you obtain from a cause analysis for a performance issue? Do you think a manager would be more likely to blame internal factors such as  a knowledge or skill deficiency, capacity, or motives for poor performance when the issue is primarily among the out-group? Would the manager be more likely to accept an environmental (external) factor as the cause when his or her in-group is equally affected by the performance problem?

WORC: Perceptual Learning Styles

April 29th, 2012

* ARTICLE *
Krätzig, G. P., & Arbuthnott, K. D. (2006). Perceptual learning style and learning proficiency: A test of the hypothesis. Journal Of Educational Psychology, 98(1), 238-246.

* SYNOPSIS *
Some educators have embraced the concept of VAK learning styles. According to this concept, each individual prefers to receive new information in one of three formats: visual, auditory, or kinesthetic (hands-on). The authors noted that many attempts have been made to develop instruments which accurately identify learning styles, intended to guide educators in tailoring their instructional methods to produce optimal results for each individual learner. The authors also noted that many of the instruments in use have not been validated through research. More critically, the authors found no research to back up the assertion that educational programs geared towards a specific learning style improved retention of new information by those identified as possessing that learning style. The authors conducted one research study to look for evidence that learning styles had such an impact, followed by a second study which examined what factors prompted individuals to classify themselves as having a specific learning style.

* STUDY 1 *
For the first study, the authors looked for a correlation between learners’ self-assessed learning styles and their ability to recall information presented in each of the three styles. As research has suggested that the time of day also affects memory, the authors included this as an additional study variable. Each of the 65 study participants first answered a single question asking them to describe themselves as visual, auditory, or kinesthetic learners; they could also describe themselves as preferring all three equally or having no preference. After this, they completed the Barsch Learning Styles Inventory (BLSI), a commonly used learning style self-assessment, and the Morningness-Eveningness Questionnaire (MEQ), which established their preferred times of day.

All study participants completed three standardized and validated tests, each of which paired one type of presentation (visual, auditory, or kinesthetic) with a memory exercise; each test measured both immediate recall and delayed recall. The kinesthetic test produced two scores, one for time needed for recall and the other for accuracy of recall.

Participants’ learning styles were identified both by the participants’ own perceptions and by the results of the BLSI; the authors found no significant correlation between perceived learning styles and instrument-identified learning styles, with perception matching the measured style for only 44.6% of the participants. The authors then analyzed the participants’ memory test scores in relation to their BLSI-identified learning styles and found no significant positive correlation between a learning style and its matching standardized test. The authors did find significant positive correlations elsewhere: between BLSI-identified kinesthetic learners and scores on the visually oriented test; between performance on the visually oriented test and performance on each aspect (time and accuracy) of the kinesthetic test; and between the participants’ scores for kinesthetic time and kinesthetic accuracy. The authors also examined time-of-day preference, comparing the scores and identified learning styles of participants completing the tests at their preferred time of day against those doing so at a non-preferred time; they found no significant correlation between preferred time of day, identified learning styles, and level of performance on the memory tests.

* STUDY 2 *
The results of the authors’ first study suggested that the BLSI might not accurately identify the learning style of an individual. They hypothesized that the problem may lie in the subjective nature of the self-assessment, and that individuals may choose their answers based on something other than their actual experiences as learners. Ten participants completed the BLSI, then were interviewed about their answers to better understand why they chose the answers they did.

The authors looked for themes in the qualitative data and found five categories into which participants’ statements about their reasoning fell: specific examples, general memories, preferences, self-efficacy, and habits and routines. The least frequent type of reasoning (6.3%) given for choosing answers on the BLSI was specific examples of the participants’ learning experiences; in every such instance, the participant named only a single experience as evidence for that answer, leading the authors to assume that the answer was chosen solely because of that single experience. The frequencies of answers based on general memories, preferences, and self-efficacy were very similar (26.7%, 27.9%, and 28.3%, respectively), with the remaining 10.8% of answers based on habits and routines. Further analysis of this qualitative data led the authors to conclude that participants remembered their prior experiences in terms of what happened during the experience, rather than in terms of how much they learned or how much of that learning they retained. Statements related to self-efficacy, defined as self-confidence in one’s ability to perform at a high level, appeared to have some potential as an indicator of real learning performance, but the authors could not prove an actual connection.
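Once interview statements have been coded into categories, the frequency counts above are mechanical to produce. A hypothetical sketch (the `codes` list is invented; the real study coded far more statements):

```python
from collections import Counter

# Hypothetical category codes assigned to participants' interview statements
codes = ["preference", "self-efficacy", "general memory", "preference",
         "habit/routine", "specific example", "self-efficacy", "general memory"]

freq = Counter(codes)
pct = {category: 100 * count / len(codes) for category, count in freq.items()}
```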

* FINDINGS *
In the first study, the authors could not find a strong correlation between identified learning styles and recall of learning resulting from a style-specific educational format. The second study investigated a possible reason for the results of the first study, which is that individuals may base their self-assessment on perceptions unrelated to actual results of prior learning experiences and thus are not answering the assessment tool accurately. However, even if an assessment tool is developed which improves the accuracy of learning style identification, the authors warn that there is still no clear evidence that catering to individual learning styles will make a significant improvement in learning performance.

* DISCUSSION FOR IPT-N MEMBERS *
When your colleagues say you should design instruction by catering to the learners’ dominant learning style, what would be your response? Do you know of any research that justifies that educational programs should cater to specific learning styles? When it comes to influencing learning outcomes, how important are other factors, such as contextual or cultural factors, compared to the preference for a certain type of stimuli (visual, auditory, or kinesthetic)?

WORC: Engagement and Productivity

April 29th, 2012

* ARTICLE *
Harter, J. K., Schmidt, F. L., & Hayes, T. L. (2002). Business-unit-level relationship between employee satisfaction, employee engagement, and business outcomes: A meta-analysis. Journal of Applied Psychology, 87(2), 268-279.

* SYNOPSIS *
Numerous studies conducted in the last few decades had found a strong correlation between individual job satisfaction and individual performance measures. The authors believed it was more accurate and relevant to gauge the aggregate level of job satisfaction at the business-unit level when linking it to performance; individual job satisfaction is affected by many factors which may or may not be under the organization’s control, and which may or may not have a distinct impact on how the business unit or organization performs. Looking at the aggregate level of satisfaction smooths out the variability and reduces measurement error, and thus produces more accurate information for the organization to act on.

The authors undertook a large-scale meta-analysis study to search for generalizable relationships which were valid across a broad range of organizational types.

*RESEARCH QUESTIONS*
The authors stated two hypotheses. Hypothesis 1 was that business-unit employee satisfaction and engagement have a positive relationship with business-unit outcomes, including productivity, profit, customer satisfaction, employee turnover, and employee safety. Hypothesis 2 was that the correlations would generalize across organizations, with few exceptions, and could thus be applied almost universally.

*METHODS*
The authors conducted a meta-analysis of studies conducted by The Gallup Organization, using a survey instrument, the Gallup Workplace Audit, which was developed to measure employees’ perception of job satisfaction and the job characteristics which were linked to employee engagement. Data collected from 198,514 individual respondents representing 7,939 business units across 36 industries along with extant data were analyzed to reveal correlations between employee satisfaction and engagement and levels of productivity, profit, customer satisfaction, employee turnover, and employee safety.
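At its core, a meta-analysis of correlations pools the per-study coefficients, weighted by sample size. The sketch below shows only that core idea, with invented numbers; Gallup’s actual procedure (with corrections for measurement error and other artifacts) is considerably more involved:

```python
def weighted_mean_r(studies):
    """Sample-size-weighted mean of observed correlations across studies.

    `studies` is a list of (sample_size, observed_r) pairs. No artifact
    corrections are applied in this bare-bones sketch.
    """
    total_n = sum(n for n, _ in studies)
    return sum(n * r for n, r in studies) / total_n

# Hypothetical per-study results: (sample size, observed correlation)
studies = [(1200, 0.25), (800, 0.15), (2000, 0.22)]
r_bar = weighted_mean_r(studies)  # ~0.215
```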

*FINDINGS*
The meta-analysis showed a positive correlation between job satisfaction and employee engagement on the one hand and the investigated business outcomes on the other. The strongest correlations were between employee satisfaction/engagement and employee turnover (the strongest of all), customer satisfaction, and safety. Correlations between employee satisfaction/engagement and productivity and profitability were still positive, but not as strong. The authors found the results consistent enough across this large and varied sample to suggest that the correlations can be generalized.

The authors suggested that an organization looking to improve its overall performance should first look at the top-performing business units within the organization. By examining the levels of employee engagement and job satisfaction within those units, and exploring the factors that influenced those levels, the organization can take steps to increase engagement and satisfaction in other units; if the authors are correct about the generalization of the correlations, doing so should result in improved business outcomes in those other units. They noted that organizations should take engagement and job satisfaction seriously due to their impact on business outcomes.

The authors also suggested that further study is warranted, particularly in examining the ‘causal’ link between engagement and short-term business outcomes like employee turnover and customer satisfaction levels (as correlation is not causation).

In larger business units, it would be difficult to accurately link the impact of a single employee’s engagement level and job satisfaction to eventual business outcomes. Taking the collective pulse of the unit’s employees allows management to focus on the overall situation and its impact, rather than be distracted by outliers which could skew their perceptions one way or another.

*IMPLICATIONS FOR HPT*
In his Behavior Engineering Model, Thomas Gilbert (2007) defined motivation as the combination of incentives and motives: Motivation = Incentives + Motives. He believed human motivation was difficult to understand, although others might characterize it as “a favorite nostrum offered as a curative for incompetence” (p. 308). He emphasized that motivation should be understood and engineered from two equally important aspects: environmental incentives (consequences) and personal motives. However, he also stated that in an actual performance analysis, attempts to improve motives directly would have little success, due to the lack of leverage for changing that component. Consistent with his behavioristic background, he believed that improving the environmental components of a job would indirectly influence workers’ motives, although he did not cite any research to support this theory.

* DISCUSSION FOR IPT-N MEMBERS *
Have you measured employee engagement as part of a performance analysis? Have you attempted to improve employee engagement in hopes to improve employee retention, customer satisfaction, employee safety, productivity, and profitability, as shown in the meta-analysis study? What motivational strategies (addressing environmental incentives and/or personal motives) have you used to improve employee engagement?

*ADDITIONAL REFERENCE*
Gilbert, T. F. (2007). Human competence: Engineering worthy performance (Tribute ed.). San Francisco, CA: Pfeiffer.

WORC: Measuring Performance Improvement Interventions

April 29th, 2012

* ARTICLE *
Schaffer, S. P., & Keller, J. (2003). Measuring the results of performance improvement interventions. Performance Improvement Quarterly, 16(1), 73–92.

* SYNOPSIS *
Despite the general belief that human performance technology (HPT) practitioners should conduct results-based evaluations (Level 4) of their interventions, surveys of those practitioners have repeatedly shown that relatively few actually do so. The authors chose to examine how HPT practitioners viewed the attributes of a results-based evaluation, and whether there was a correlation between each attribute and the frequency with which results-based evaluations were conducted. The authors identified five attributes which related to the evaluation itself, plus two attributes linked to organizational factors.
* RESEARCH QUESTIONS *
The authors posed two research questions:

1. What is the frequency of use of results-oriented evaluation among ISPI professionals who have at least a basic awareness or knowledge of this type of evaluation?
2. What is the relationship between ISPI members’ frequency of use, perception of attributes, and organizational variables relative to Level 4 evaluation?

The evaluation process attributes selected by the authors were as follows:
1. The relative advantage of Level 4 compared to current practice
2. Trialability, or degree to which Level 4 may be experimented with on a trial basis
3. Compatibility of Level 4 with current organizational practices
4. Complexity of implementing Level 4 evaluation
5. Observability of results of Level 4 evaluation to others

The authors also included two organizational factors:
1. Stakeholder participation in adoption, adaptation, implementation (organizational factor)
2. Top management support in terms of commitment and resources (organizational factor)

* METHODS *
The authors developed a 42-item survey instrument which asked respondents to rate their knowledge of Level 4 evaluation, the frequency of its use within their organizations, and their perception of each attribute. Survey participants were solicited by e-mailing a random sample of 800 members of the International Society for Performance Improvement (ISPI). Of those 800 addresses, 80 were invalid; of the remaining 720 members who received the invitation, 274 completed the survey, a 38% response rate. The authors found no significant demographic differences between respondents and non-respondents, indicating a low response bias. They also found no significant differences in responses related to respondents’ job types (e.g., training director, performance technologist, consultant), and thus analyzed the data as one group.
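The response-rate arithmetic works out as follows (a trivial sketch using the figures reported in the article):

```python
sampled = 800      # ISPI members e-mailed
invalid = 80       # invalid e-mail addresses
completed = 274    # completed surveys

delivered = sampled - invalid         # 720 valid recipients
response_rate = completed / delivered
print(f"{response_rate:.0%}")         # prints "38%"
```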

This survey expanded on earlier research on attributes and usage by adding the two organizational factors to the Level 4 attributes. The survey also collected qualitative data through two open-ended questions which asked respondents to identify the single most difficult HPT process to complete and to give their opinion about Level 4 evaluations.

The authors constructed an HPT Change framework based on the work of Rogers, Keller, Gilbert, and Rummler & Brache which presented five major interconnected influences on human performance:
1. Opportunity (defined in this study as the organizational support and commitment to Level 4 evaluation)
2. Capability (defined in this study as the knowledge and skills needed to conduct Level 4 evaluations)
3. Motivation (defined in this study as the perception and attitude of potential adopters and stakeholders toward the Level 4 evaluation process)
4. Collaboration (defined in this study as the level of stakeholder or user involvement in the Level 4 evaluation process)
5. Value (defined in this study as the frequency of Level 4 usage)

The qualitative data was coded based on this HPT Change framework, through which themes in the responses were discovered.

* FINDINGS *
The authors found that the respondents had a high degree of awareness of Level 4 evaluation; about 92% reported at least basic knowledge of it, with 58% reporting above-average or excellent knowledge. However, only about 3% reported using Level 4 evaluations most of the time (more than 75% of the time), and only about 25% reported using Level 4 at least some of the time (less than 75% of the time). 43% reported that their organizations were considering or testing Level 4 evaluations but had not yet reached full implementation, leaving 31% who were not using or even considering Level 4. These figures are consistent with the results of both prior and later research.

When asked to rate each attribute on a scale of 1 to 5, with 5 indicating the most impact on successful implementation of Level 4, respondents rated the relative advantage of Level 4 the highest with an average score of 3.8, followed by trialability (3.6), compatibility (3.5), complexity (3.5), and observability (3.4). Both organizational factors were rated as having less perceived impact, with management support rated at 2.8 and stakeholder involvement at 2.7.
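Ranking attributes by mean Likert rating, as in the paragraph above, is a one-liner once the raw ratings are in hand. The ratings below are invented for illustration; the article reports only the resulting means:

```python
# Hypothetical raw 1-5 ratings per attribute (invented for illustration)
ratings = {
    "relative advantage": [4, 4, 3, 4],
    "trialability": [4, 3, 4, 3],
    "management support": [3, 3, 2, 3],
}

# Mean rating per attribute, then attributes sorted from highest to lowest
means = {attr: sum(vals) / len(vals) for attr, vals in ratings.items()}
ranked = sorted(means, key=means.get, reverse=True)
```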

The authors also looked at the relationship between the five attributes, the two organizational factors (management support and stakeholder involvement), and the usage of Level 4. They hypothesized different models for interrelated impact but were unable to establish relationships using the data, and concluded that the relationships may exist but required further study to validate.

When asked which HPT process was the most difficult to complete, 30% chose evaluation, with implementation/change ranked second (21%) and performance analysis third (20%).

The themes which emerged from the qualitative data were the following:
1. Evaluation is a way of thinking
2. Leverage existing strengths and processes
3. Kirkpatrick Levels = Training (not perceived as a tool for evaluating non-training interventions)
4. Mixed Messages (confusing terminology, evaluation as an afterthought, evaluators not evaluating their own performance)
5. Rewards and Incentives (perception by stakeholders and management that evaluation either has no value of its own, or may decrease their own value by revealing errors and inefficiencies)

* DISCUSSION FOR IPT-N MEMBERS *
How would you rate the attributes the authors defined, in terms of their impact on your ability to complete Level 4 evaluations? Do you agree with the respondents that the attributes of the evaluation process itself have more impact than management support or stakeholder involvement?

WORC: Self-Efficacy and Evaluation

April 29th, 2012

Introduction: Because this was meant for a practitioner audience, I avoided delving into the psychology behind predictive behavior and Bandura’s theory of self-efficacy. 

As I noted before, almost all training is evaluated at Level 1. The 2009 ASTD survey found that only 54.5% of training is evaluated at Level 3, even though 75% of the respondents placed a high or very high value on the results of such evaluations. In his 2007 doctoral dissertation on Level 3 and Level 4 evaluation use, Joseph Pulichino thought that control over data collection was a major factor. It’s easy to include end-of-class surveys and quizzes with which to gather Level 1 and 2 data, because the class attendees are still in the room or logged into the e-learning session. For Level 3 and 4 evaluations, you have to pursue the data with cooperation from others in the organization. I wonder how many of those Level 3 evaluations in the 54.5% were simply learners self-reporting (via a survey) on their performance, with no triangulation or other verification?

Lanigan appears to be proposing an alternative to Kirkpatrick’s Level 3 (behavior-based evaluation) based on the concept of predictive behaviors and the reality of problems in conducting “proper” behavior-based evaluations. I don’t know if I’d go that far, but I think including questions related to self-efficacy in a Level 1 survey will produce some helpful feedback to training developers. If you’re limited to Level 1 surveys, make them good surveys.

 

* ARTICLE *

Lanigan, M. L. (2008). Are self-efficacy instruments comparable to knowledge and skills tests in training evaluation settings? Performance Improvement Quarterly, 20(3-4), 97–112.

* SYNOPSIS *

Ajzen’s theory of planned behavior states that an individual’s actual behaviors can be predicted by that individual’s behavioral intentions. The most significant component of behavioral intentions is self-efficacy, which is the confidence the individual feels in his or her ability to perform a behavior successfully. The author, building on her previous research on the role of self-efficacy in evaluating training outcomes, investigated the relationship between self-efficacy scores and knowledge/skill test scores. Is there a correlation between a learner’s sense of confidence in his or her ability to use the new knowledge or skills and the learner’s tested level of knowledge or skills?

* RESEARCH QUESTIONS *
The author posed three primary research questions:
1. What is the relationship between self-efficacy and knowledge exam scores?
2. What is the relationship between self-efficacy and skills test scores?
3. What is the relationship between self-efficacy scores collected both pre- and post-training, and self-efficacy scores collected only post-training (but which also measure the learner’s pre-training level of self-efficacy)?

The goal of this research was to look for a direct relationship between a learner’s self-efficacy and that learner’s knowledge or skills test scores. Does the learner’s perception of his or her ability match the learner’s measured knowledge or skills? The author also looked at the timing of self-efficacy assessment. Is it better to measure self-efficacy before training begins, and then measure it again at the end of training to see how the learner’s confidence in understanding the course content has changed? Or is it better to ask the learners to assess their confidence levels at the end of training, and at that time ask them to rate how confident they were before training started as well as after training concluded?

* METHODS *
The study involved a class of 98 individuals in a certification program who attended a 4-hour workshop on how to write a cover letter. The author created three assessment instruments for this study. The first instrument measured the learner’s self-efficacy regarding the task of writing a cover letter. The second instrument was a skills test requiring the learner to write a cover letter based on given information. The third instrument measured the learner’s knowledge about the correct structure and content of a cover letter. The three instruments were administered in order before the training class started, and again at the conclusion of class. The self-efficacy assessment administered before class started only asked learners to measure their pre-training level of self-efficacy; the self-efficacy assessment administered post-training asked learners to rate both their pre-training and post-training levels.

* FINDINGS *
In this study, the author found a significant link between self-efficacy levels and knowledge scores, both pre-training and post-training. The links between self-efficacy and skills scores, both pre- and post-training, were also significant, though not strongly so. There was a strong correlation between the learners’ self-efficacy scores measured pre-training and how they rated their pre-training self-efficacy after training concluded.
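The agreement between pre-training self-efficacy measured before class and the retrospective pre-training rating collected afterward is, at heart, a correlation between two sets of ratings. A rank-based (Spearman) sketch, with invented ratings and no tied values for brevity:

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation (assumes no tied values, for brevity)."""
    def ranks(vals):
        order = sorted(vals)
        return [order.index(v) + 1 for v in vals]
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    # With no ties, Spearman's rho reduces to 1 - 6*sum(d^2) / (n*(n^2 - 1))
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical self-efficacy ratings: measured before training vs. the
# retrospective "how confident were you before class?" rating afterward
pre_measured = [3, 5, 2, 4, 1]
pre_retrospective = [2, 5, 3, 4, 1]

rho = spearman_rho(pre_measured, pre_retrospective)  # 0.9 here
```

A high rho, as in this toy example, would mirror the author’s finding that the two timings produce essentially the same pre-training ratings.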

From this, the author drew two main conclusions.

1. For training topics which are low-risk, evaluators may be able to substitute self-efficacy measures for knowledge or skill measures. Although the author did not discuss this further, it seems possible that self-efficacy assessment may be a better predictor of future behavior than knowledge quizzes when the training topic is behavior-oriented, such as customer service skills. The author acknowledges that for higher-risk training programs which require precise knowledge or skills, such as pilot or medical training, much better evidence of a link between self-efficacy and knowledge or skills would be needed in order to substitute self-efficacy assessment for knowledge/skills assessment.

2. Since there was no significant difference in how learners rated their level of pre-training self-efficacy when asked before training started or after it ended, the author concluded that this assessment could be given at either time. However, the author noted that this may be dependent on the training subject; learners may not understand more complex training objectives before class begins and thus may not yet be able to accurately rate their self-efficacy.

*IMPLICATIONS FOR HPT*
ASTD’s 2009 study on the usage of training evaluations found that 81% of the organizations in the study evaluated training at the knowledge level; this level of evaluation is usually done at the end of the program by the training department, which can collect and analyze this data. Only 55% evaluated their training programs by examining the actual post-training job behaviors of the learners; this level of evaluation requires more extensive data collection and a greater need for resources and organizational support which may not be available. By incorporating the concept of planned behavior into the evaluations conducted at the end of training programs, training departments could collect data which may indicate potential changes in the learners’ on-the-job behaviors. While this is no substitute for conducting a true behavior-based evaluation, it may give training departments some useful guidance on the effectiveness of their programs that they otherwise would lack.

* DISCUSSION FOR IPT-N MEMBERS *
Do you include questions about self-efficacy when you conduct reaction-level evaluations (Kirkpatrick Level 1) of your training programs? Have you used self-efficacy assessment as an adequate substitute for knowledge or skills assessment? If so, what was the topic of the training program? What should practitioners be careful about when substituting self-efficacy assessment for knowledge or skills assessment?

WORC: Instructional Strategy Decisions

April 29th, 2012

Introduction: We talk a lot about transfer of training, and how support or non-support from managers and peers affects how well learners can apply their newly-acquired knowledge/skills in the workplace. This article turns the spotlight back on ourselves and our field. When we students with our andragogical theories and systemic thinking and freshly-printed diplomas go bouncing out into the real world of training & development, what happens with our own transfer of training? Are we using our newly-acquired knowledge, or do we fall in (willingly or otherwise) with what our new managers and peers practice?

 

*INTRODUCTION*

Students in instructional design programs are exposed to theories on adult learning and instructional design, and use those theories when creating strategies for class projects. What happens when they become practitioners? Do they continue to use the learning and instructional design theories they’ve learned in class, or do they come to rely on other strategies for developing training interventions?

* ARTICLE *

Christensen, T. K. and Osguthorpe, R. T. (2004). How do instructional-design practitioners make instructional-strategy decisions? Performance Improvement Quarterly, 17(3), 45–65.

* SYNOPSIS *

The researchers surveyed alumni from several graduate-level programs in instructional design to see how they made decisions about instructional strategy in actual practice.  As students, these individuals had been educated in adult learning theory and instructional design theory. The researchers wondered if the alumni continued to use their theoretical knowledge after graduation, or if they turned to other ways of deciding on appropriate strategies for effective learning interventions. If the latter were the case, this would indicate a possible gap between the theoretical strategies taught in instructional design programs and the practical strategies which practitioners find more suitable to actual projects.

* RESEARCH QUESTIONS *

The researchers created their survey to answer four questions about instructional design strategy:

  • How do instructional-design practitioners decide which instructional strategies to use?
  • What role does theory play in this process?
  • Where do ID practitioners learn about new instructional theories, trends, and strategies?
  • What is the predominant epistemology underlying current ID practice?

* METHODS *

The researchers conducted a web-based survey of alumni of graduate-level instructional design programs at five U.S. universities. To restrict the sample to those making instructional-strategy decisions on a regular basis in their current jobs, the researchers asked the invited alumni to submit the survey only if their current job involved such decisions. A total of 130 alumni responded, of whom 113 completed the survey.

The respondents were asked to rate how often they used each of the 12 design strategies listed in the survey (on a 5-point scale – Never, Almost Never, Sometimes, Often, and Very Often). They were then asked to list the instructional design theories and learning theories which they found useful; respondents were not given a list from which to choose, so it was up to them to decide what theories they used and how they should be categorized. Thirdly, the respondents identified which information sources they used to learn about new instructional theories, trends, and strategies. Finally, respondents were given three pairs of statements which reflected a contrast between objectivist and constructivist viewpoints on learning, and were asked to indicate where their own viewpoints fell between the two.

* FINDINGS *

Although the researchers did not necessarily intend to examine the roles played by the alumni in their organizations, the demographic data collected displayed a broad range of job titles; only 14% of the respondents held titles which included the phrase “instructional designer” or “instructional design”. The researchers felt this might indicate that instructional design skills are advantageous across multiple roles.

1. Instructional Strategy Decisions

The methods most regularly used by the respondents for making decisions about instructional strategy were to brainstorm with other people on the instructional design project (86%), derive strategy from prior experience (79%), and adapt strategies they have seen used successfully by other practitioners (74%). Despite the inclusion of learning theories and instructional design theories in their graduate studies, only 54% of respondents regularly drew on their knowledge of learning theories and 51% on their knowledge of instructional design theories.

2. Role of Theory

The researchers differentiated between learning theory, which explains why a certain instructional design could be effective, and instructional design theory, which offers guidelines for choosing instructional strategies and methods to meet specific goals.

Survey participants were asked to name the learning theories and instructional design theories which they found particularly useful on their projects. For learning theories, the researchers grouped the respondents’ answers into broad categories. Only about half of the individuals surveyed responded to this set of questions.

Of the 59 individuals who named one or more instructional design theories which they used in practice, 36% mentioned Gagne’s Nine Events of Instruction, 27% mentioned Merrill’s theories, 20% referred to Dick & Carey’s model, and 17% to Keller’s ARCS motivational model.

Of the 56 individuals who named one or more learning theories which they used in practice, 46% noted constructivist theories (including situated learning and cognitive apprenticeship), 30% mentioned cognitive theories (including information processing theory and schema theory), and 30% listed theories which are properly classified as instructional design theory (including Gagne and Merrill’s theories).

3. Information Sources

81% of the survey respondents reported regularly learning about new instructional theories, trends, and strategies from interactions with peers and co-workers. About 50% of them reported doing so by reading instructional design books, visiting relevant Internet websites, and reading professional publications. Perhaps surprisingly, despite the high reported level of peer interaction for learning, much lower percentages of respondents reported learning this information through professional conferences (28%) and internet forums (19%).

4. Underlying Assumptions

The researchers found that in one pair of statements, 45% of the respondents were considered biased towards constructivist principles. For another pair of statements, 45% were considered biased towards objectivist principles. For the third pair of statements, the researchers found that the responses were evenly distributed between the two sides. From this, the researchers concluded that no clear bias towards objectivism or constructivism was evident in this group of respondents.

*CONCLUSIONS*

The researchers inferred from the survey results that practitioners are most likely to make their instructional-strategy decisions through collaborative efforts with co-workers and peers, rather than in isolation. The researchers felt that educational programs for instructional design should ensure that students learn the skills necessary for effective group work. They also concluded that more effort should be made to support peer interaction among instructional design practitioners as a way to enhance the spread of practical knowledge.

Although the surveyed practitioners did use theory to help develop instructional strategy, the results indicated that theory was just one component of that development. The researchers concluded that graduate programs should teach theory in context, so that future practitioners understand not just the theories but the application of theories to their real-world projects.

Regarding epistemology, the researchers concluded that instructional design practitioners pick and choose what suits the current purpose, rather than adhering strictly to objectivist or constructivist thinking. The researchers concluded that graduate programs should present a variety of philosophical approaches from which future practitioners can draw as needed.

* DISCUSSION FOR IPT-N MEMBERS *

For those of you who are instructional design practitioners, how do you develop your instructional strategies? How often do you find the theories and models learned in the IPT program useful in your work projects? Or do you find that those theories don’t seem to apply to your actual practice?