(E)valuation of Training
and Development
7
Learning Objectives
After reading this chapter, you should be able to:
• Differentiate between formative and summative evaluations.
• Use Kirkpatrick’s four-level evaluation framework.
• Compute return on investment.
• Explain why evaluation is often neglected.
One of the great mistakes is to judge policies and programs by their intentions rather
than their results.
—Milton Friedman, Economist
Pretest
1. It is possible for organizations to try out trainings before they are launched.
a. true
b. false
2. Assessing whether trainees enjoyed training is important only as an evaluation of the
trainer’s competence.
a. true
b. false
3. Return on investment should be calculated after every training session to determine
whether it was cost-effective and benefited the company as a whole.
a. true
b. false
4. Fewer than 25% of organizations perform formal evaluations of training effectiveness.
a. true
b. false
5. Failure to evaluate trainings may be not only unprofessional but also unethical.
a. true
b. false
Answers can be found at the end of the chapter.
Introduction
We seek to answer one overarching question in the final, evaluation phase of ADDIE: Was
the training effective? (See Figure 7.1.) In particular, we assess whether we realized expected
training goals—as uncovered by our analysis phase—specifically, whether the trainees’ posttraining KSAs improve not only their performance, but also the organization’s performance.
As we will see, the process of training evaluation includes all of these issues, as well as deciding which data to use when evaluating training effectiveness, determining whether further
training is needed, and assessing whether the current training design needs improvement.
Ultimately, evaluation creates accountability, which is vital given the significant amount
organizations spend on training and developing employees—approximately $160 billion
annually (ASTD, 2013). This significant investment makes it imperative that organizations
know whether their training efforts yield a positive financial return on training investment (ROI).
Figure 7.1: ADDIE model: Evaluate
In this final phase of ADDIE, we evaluate how effective the training has been. From assessing
any improvement in the KSAs of the trainees to the financial return on the training
investment, the evaluation phase appraises the effectiveness of not only our prior analysis,
design, development, and implementation, but also of the training in totality.
[Diagram: the five ADDIE phases (Analyze, Design, Develop, Implement, Evaluate), with Evaluate highlighted.]

7.1 Formative Evaluation
Although evaluation is the last phase of ADDIE, it is not the first time aspects of the training
program are evaluated. When it comes to training evaluation, we assess the training throughout all phases of ADDIE, using first what is known as a formative evaluation. Formative evaluation is done while the training is forming; that is, prior to the real-time implementation and
full-scale deployment of the training (Morrison, Ross, & Kalman, 2012). Think of formative
evaluation as a “try it and fix it” stage, an assessment of the internal processes of the training
to further refine the external training program before it is launched.
Formative evaluations are valuable because they can reveal deficiencies in the design, development, and implementation phases of the training that may need revision before real-time
execution (Neirotti & Paolucci, 2013; U.S. Department of Health and Human Services, 2013;
Wan, 2013).
Recall from Chapter 6 that formative evaluations can range from editorial reviews of the training and materials—which may include a routine proofread of the training materials to check
for misspelled words, incomplete sentences, or inappropriate images—to content reviews,
design reviews, and organizational reviews of the training (Larson & Lockee, 2013; Noe,
2012; Piskurich, 2010; Wan, 2013). So, for example, we may find in a content review that our
training is not properly linked to the original learning objectives. Or we may conclude during a design review that because e-learning is not a good fit with the organizational culture,
instructor-led training is a more appropriate choice.
Formative evaluations also encompass pilot testing and beta testing. With pilot tests and beta
tests, we are out to confirm the usability of the training, which includes assessing the effectiveness of the training materials and the quality of the activities (ASTD, 2013; Stolovitch &
Keeps, 2011; Wan, 2013). Both beta tests and pilot tests are considered types of formative
evaluation because they are performed as part of the prerelease of the training. For the pilot
and beta testing, selected employees and SMEs are chosen to test the training under normal,
everyday conditions; this approach is valuable because it allows us to pinpoint any remaining flaws and get feedback on particular training modules (Duggan, 2013; Piskurich, 2010;
Wan, 2013).
7.2 Summative Evaluation
Whereas formative evaluation focuses on the training processes, summative evaluation focuses
on the training outcomes—for both the learning and the performance results following the
training (ASTD, 2013; Piskurich, 2010; Wan, 2013). Summative evaluation is the focus of the E
phase of ADDIE. According to Stake (2004), one way to look at the difference between formative and summative evaluation is “when the cook tastes the soup, that’s formative evaluation;
when the guests taste the soup, that’s summative” (p. 17).
In summative evaluation, we assess whether the expected training goals were realized and,
specifically, whether the trainees’ posttraining KSAs improved their individual performance
(and, ultimately, improved the organization’s overall performance). As Figure 7.2 depicts,
in summative evaluation, we assess both the short-term learning-based outcomes—such
as the trainees’ reactions to the training and opinions about whether they actually learned
anything—and the long-term performance-based outcomes. These long-term performance-based outcomes include assessing whether a transfer of training occurred—that is, application to the workplace via behavior on the job—as well as whether any positive organizational
changes resulted, including return on investment (Noe, 2012; Phillips, 2003; Piskurich, 2010).
Figure 7.2: Summative evaluation’s short-term and long-term outcomes
Training evaluation can be broken down into short-term and long-term assessments. Short-term evaluations are usually trainee focused, whereas long-term assessments are focused on
the training itself.
[Diagram: summative outcomes divided into short-term outcomes (reactions of learners, learning by participants) and long-term outcomes (behavior on the job, organizational impact and return on investment).]
As Figure 7.3 depicts, however, the most
common assessments organizations perform with
summative evaluation are ultimately the least valuable to them (ASTD, 2013; Nadler & Nadler,
1990). The next section will discuss each level of evaluation.
Figure 7.3: Use versus value in evaluation
Although levels 1 and 2 are the most used and usually the easiest to compile, levels 3, 4, and 5 (ROI) are deemed to provide the most valuable information for assessing training effectiveness, but they require more complex calculations.
[Bar chart: for each evaluation level (reactions of participants, evaluation of learning, evaluation of behavior, evaluation of results, return on investment), the percentage of organizations that use that level to any extent versus the percentage that say the level has high or very high value.]
Source: Adapted from American Society for Training & Development. (2013). State of the industry report. Alexandria, VA: ASTD.
7.3 Kirkpatrick’s Four-Level Evaluation Framework
Perhaps the best known and most drawn-upon framework for summative evaluation was
introduced by Donald Kirkpatrick (Neirotti & Paolucci, 2013; Phillips, 2003; Piskurich, 2010;
Vijayasamundeeswari, 2013; Wan, 2013), a Professor Emeritus at the University of Wisconsin
and past president of the ASTD. Kirkpatrick’s four-level training evaluation taxonomy—
first published in 1959 in the US Training and Development Journal (Kirkpatrick, 1959; Kirkpatrick, 2009)—depicts both the short-term learning outcomes and the long-term performance outcomes (see Figure 7.4). Let us detail each level now.
Figure 7.4: Kirkpatrick’s four-level evaluation
Donald Kirkpatrick’s four-level evaluation is the widely used standard to illustrate each level
of training’s impact on the trainee and the organization as a whole. Kirkpatrick’s typology
is a good starting point to frame discussions regarding the trainee’s reaction to the training
(level 1), if anything was learned from the training (level 2), if the trainee applied the
training through new behavior (level 3), and ultimately, if the training resulted in positive
organizational results (level 4).
[Pyramid diagram: level 1, reactions; level 2, learning; level 3, transfer; level 4, results.]
Level 1—Reaction: Did They Like It?
A level 1 assessment attempts to measure the trainees’ reactions to the training they have
just completed (Kirkpatrick, 2009; Wan, 2013; Werner & DeSimone, 2011). Specifically, level
1 assessments ask participants questions such as:
• Did you enjoy the training?
• How was the instructor?
• Did you consider the training relevant?
• Was it a good use of your time?
• Did you feel you could contribute to your learning experience?
• Did you like the venue, amenities, and so forth?
A level 1 assessment is important not only to assess whether the trainees were satisfied with
the training session per se, but also—and perhaps more significantly—to predict the effectiveness of the next level of evaluation: level 2, learning (ASTD, 2013; Kirkpatrick, 2009; Morrison et al., 2012; Noe, 2012; Wan, 2013). That is, as level 1 reaction goes, so goes level 2
learning. According to a recent study (Kirkpatrick & Basarab, 2011), there was a meaningful
correlation between levels 1 and 2, in that positive learner engagement led to a higher degree
of learning. This outcome specifically follows the idea of attitudinal direction (Harvey, Reich,
& Wyer, 1968; Kruglanski & Higgins, 2007), whereby a positive reaction (emotional intensity)
can lead to constructive conclusions, as depicted in the following formula:
Attitudinal Direction
Perception + Judgment → Emotion (Level 1)
(Positive) Emotion → Learning (Level 2)
With attitudinal direction in mind, a level 1 evaluation focuses on measuring attitudes, usually with a questionnaire. A level 1 survey includes both rating scales and open-ended narrative opportunities (Clark, 2013; Neirotti & Paolucci, 2013; Wan, 2013).
Typically, participants are not asked to put their names on the survey, based on the assumption that anonymity breeds honesty. Level 1 evaluation instruments are part of the training
materials that would have been created in the development phase of ADDIE.
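As a minimal sketch of how anonymous level 1 ratings might be tabulated, the following Python snippet averages rating-scale responses per survey item. The item names, respondents, and 5-point scale are hypothetical assumptions for illustration, not part of any specific instrument described in the chapter.

```python
from statistics import mean

# Anonymous responses: each maps survey items to a rating,
# where 1 = strongly disagree ... 5 = strongly agree.
# Items and values are hypothetical.
responses = [
    {"enjoyed_training": 5, "instructor_quality": 4, "relevant_to_job": 4},
    {"enjoyed_training": 4, "instructor_quality": 5, "relevant_to_job": 3},
    {"enjoyed_training": 5, "instructor_quality": 4, "relevant_to_job": 5},
]

def summarize_reactions(responses):
    """Return the mean rating per survey item across all respondents."""
    items = responses[0].keys()
    return {item: round(mean(r[item] for r in responses), 2) for item in items}

summary = summarize_reactions(responses)
print(summary)
```

Open-ended narrative items would, of course, be read and coded separately rather than averaged.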
Level 2—Learning: Did They Learn It?
In a level 2 assessment, we attempt to measure the trainees’ learning following the training
that they just completed (Kirkpatrick, 2009; Wan, 2013; Werner & DeSimone, 2011) and, specifically, in relation to the learning outcomes we established during the analysis and design
phases of ADDIE. Remember, learning outcomes can include cognitive outcomes (knowledge), psychomotor outcomes (skills), and affective outcomes (attitudes) (Noe, 2012;
Piskurich, 2010; Rothwell & Kazanas, 2011).
• With cognitive outcomes, we determine the degree to which trainees acquired new
knowledge, such as principles, facts, techniques, procedures, or processes (Noe, 2012;
Piskurich, 2010; Rothwell & Kazanas, 2011). For example, in a new employee orientation, cognitive outcomes could include knowing the company safety rules or product
line or learning the company mission.
• With skills-based or psychomotor learning outcomes, we assess the level of new
skills as a function of the new learning, as seen, for example, in newly learned
listening skills, conflict-handling skills, or motor or manual skills such as computer repair and replacing a power supply (Morrison et al., 2012; Noe, 2012;
Piskurich, 2010).
• Affective learning outcomes focus on changes in attitudes as a function of the new
learning (Noe, 2012; Piskurich, 2010). For example, trainees who learned a different
attitude regarding other cultures following diversity training or those who gained
a new attitude regarding the importance of safety prevention after a back injury–
prevention training class have achieved learning outcomes.
As with level 1, evaluations for level 2 are done immediately after the training event to determine if participants gained the knowledge, skills, or attitudes expected (Morrison et al., 2012;
Noe, 2012; Piskurich, 2010). Measuring the learned KSA outcomes of level 2 requires testing
to demonstrate improvement in any or all level 2 outcomes:
• Cognitive outcomes and new knowledge are typically measured using trainer-
constructed achievement tests (such as tests designed to measure the degree of
learning that has taken place) (Duggan, 2013; Noe, 2012; Piskurich, 2010; Wan, 2013).
• For newly learned motor or manual skills, we can use performance tests, which
require the trainee to create a product or demonstrate a process (Duggan, 2013; Noe,
2012; Piskurich, 2010; Wan, 2013).
• Attitudes are measured with questionnaires similar to the questionnaires described
for level 1 evaluation, with the participants giving their ratings for various items (for
example, strongly agree, agree, neutral, disagree, or strongly disagree). They also
include open-ended items to let trainees describe any changed attitudes in their own
words (for example, “How do you feel about diversity in the workplace?”) (Duggan,
2013; Kirkpatrick, 2009; Noe, 2012; Piskurich, 2010; Wan, 2013).
With a level 2 posttraining learning evaluation, Kirkpatrick recommends first giving participants a pretest before the training and then giving them a posttest after the training (Cohen,
2005; Kirkpatrick, 1959; Kirkpatrick, 2009; Phillips, 2003; Piskurich, 2010) to determine if
the training had any effect, positive or negative. Creating valid and reliable tests is not a casual
exercise; in fact, there is a credential one can attain to become an expert in testing and evaluation (http://www.itea.org/professional-certification.html). Does the test measure what it
is intended to measure? If the same test is given 2 months apart, will it yield the same result?
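The pretest–posttest comparison Kirkpatrick recommends can be sketched as a simple per-trainee score difference. The trainee names and scores below are hypothetical, and a real evaluation would first establish the test's validity and reliability as discussed above.

```python
# Hypothetical test scores (percent correct) before and after training.
pretest  = {"Ana": 55, "Ben": 70, "Caro": 60}
posttest = {"Ana": 80, "Ben": 72, "Caro": 58}

def learning_gains(pre, post):
    """Per-trainee score change: positive = gain, negative = decline."""
    return {name: post[name] - pre[name] for name in pre}

gains = learning_gains(pretest, posttest)
avg_gain = sum(gains.values()) / len(gains)
print(gains, avg_gain)
```

Note that the comparison can reveal negative as well as positive effects, which is precisely why the pretest matters.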
HRD in Practice: A U.S. Department Uses Level 2 Evaluation
The U.S. Department of Transportation uses oral quizzes or tests for level 2 evaluation. Oral
quizzes or tests are most often given face-to-face and can be conducted individually or in a
group setting. Here is a typical example of the department’s level 2 oral quizzing:
1. When it comes to Highway Safety tell me two safety challenges you are facing right now
in your state or region.
2. What are “special use” vehicles and what is special about them?
3. What type of crossing is required for train speeds over 201 km/h (125 mph)?
4. Identify the following safety device? …
5. Define what a passive device is? Can anyone give me an example of a passive device?
6. What are three types of light rail alignments?
7. Why is aiming of roundels so critical? (p. 4)
Source: US Department of Transportation. (2004). Level II evaluation. Washington, DC: Author. Retrieved from https://www.nhi.fhwa.dot.gov/resources/
docs/Level%20II%20Evaluation%20Document.pdf
Consider This
1. Do you think this is a good way to evaluate trainees’ knowledge? Why or why not?
2. Do you think it is better to conduct this oral quiz in a group or individually? Explain your
reasoning.
3. What suggestions could you provide to improve the level 2 oral quizzes for the U.S.
Department of Transportation?
Level 3—Behavior: Did They Apply It?
A level 3 evaluation assesses the transfer of training; that is, do the participants of the training program apply their new learning, transferring their skills from the training setting to the
workplace, and as a result, did the training have a positive effect on job performance? Level 3
evaluations specifically focus on behavioral change via the transfer of knowledge, skills, and
attitudes from the training context to the workplace.
However, before assessing skills transfer to the job, let us consider a practical aspect of transfer-of-training evaluation: We must allow trainees a sufficient amount of time and opportunity to apply the training skills in the workplace (Piskurich, 2010). The amount of time will
depend on numerous factors, including (ASTD, 2013; Cohen, 2005; Morrison et al., 2012; Noe,
2012; Wan, 2013):
• the nature of the training,
• the opportunity available to implement the new KSAs, and
• the level of encouragement from line management.
Typically, we can confirm transfer by observing the posttrained participants and conducting work sampling (Kirkpatrick, 2009; Noe, 2012; Wan, 2013); evaluation can occur 90 days
to 6 months posttraining (Kirkpatrick, 2009; Tobias & Fletcher, 2000). Figure 7.5 shows an
example of level 3 training results.
Furthermore, as we will discuss in more detail in Chapter 8:
• positive transfer of training is demonstrated when we observe positive changes in
KSAs, and
• negative transfer is evident when learning occurs, but we observe that KSAs are at
less-than-pretraining levels (Noe, 2012; Roessingh, 2005; Underwood, 1966).
As discussed in Chapter 2, a trainee may have learned from the training but not be willing to
apply the training to the workplace for several reasons. It may sound something like, “Oh, I
know how to do it, but I am not doing it for you.” This is known as zero transfer of training,
in which learning occurs, but we observe no changes in trainee KSAs. So, and perhaps not
surprisingly, there is not a strong positive correlation between level 2 learning and level 3
behavior (Kirkpatrick & Basarab, 2011). That is, just because trainees learn something does
not mean they will necessarily apply it. As discussed in previous chapters, irrespective of
learning the new KSAs and being able to apply them to the workplace, the trainee must also
be willing to apply them.
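The three transfer outcomes described above (positive, zero, and negative transfer) can be sketched as a simple classifier. The numeric KSA scale and the tolerance parameter are illustrative assumptions, not a standard measurement.

```python
def classify_transfer(pre_ksa, post_ksa, tolerance=0.0):
    """Classify transfer of training from observed on-the-job KSA levels.

    positive: posttraining KSAs exceed pretraining levels
    zero:     no observable change (learning occurred but is not applied)
    negative: KSAs fall below pretraining levels
    """
    change = post_ksa - pre_ksa
    if change > tolerance:
        return "positive"
    if change < -tolerance:
        return "negative"
    return "zero"

# Example: observed KSA ratings before and after training (hypothetical scale)
print(classify_transfer(3.0, 4.2))  # positive
print(classify_transfer(3.0, 3.0))  # zero
print(classify_transfer(3.0, 2.1))  # negative
```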
Level 4—Results: Did the Organization Benefit?
With a level 4 evaluation, the goal is to find out if the training program led to improved bottom-line organizational results (such as business profits). Similar to the correlation between
levels 1 and 2, studies have shown a correlation between levels 3 and 4 (Kirkpatrick, 2009);
specifically, if employees consistently perform critical on-the-job behaviors, individual and
overall productivity increase.
Level 4 outcomes can include other major results that contribute to an organization’s effective functioning. Level 4 outcomes are either changes in financial outcomes or changes in
other metrics (for example, excellent customer service) that should indirectly affect financial
outcomes at some point in the future; these are known as performance drivers (Swanson,
1995; Swanson & Holton, 2001). Here are some examples of level 4 performance drivers and
outcomes (Cohen, 2005; Kirkpatrick, 2009; Phillips, 2003; Piskurich, 2010):
• Improved quality of work
• Higher productivity
• Reduction in turnover
• Reduction in scrap rate
• Improved quality of work life
• Improved human relations
• Increased sales
• Fewer grievances
• Lower absenteeism
• Higher worker morale
• Fewer accidents
• Greater job satisfaction
• Increased profits
Isolating the Effects of Training
A major challenge to evaluating training's effectiveness is isolating any subsequent performance improvement to the training itself. That is, improved performance may coincide with the timing of the training but not be caused by the training itself. Phillips (2003)
attributes this to the need for isolation. For example, Cohen (2005) described the following
scenario:
Let’s say training was focused on new selling techniques for an organization’s
sales reps and the post-training assessment of sales and call volume are found
to be significantly better than the pre-training amounts; this change could be as
much due to an upward turn in the economy as it is to the training itself. (p. 23)
In this case, linking the improvement to the training would be incorrect, so we must guard against crediting the training for performance improvements that actually stem from nontraining causes. To mitigate
this possibility, along with using pretests and posttests in level 2, Kirkpatrick (1959, 2009)
also recommends using control groups to statistically manage and separate the impact of
other variables. Control groups do not receive the training, or they go through other training unrelated to the training of interest, so we can assess the unique effect of the training
intervention. In Cohen’s example, a control group would include sales reps not subjected
to the specific training program, and then the control group’s performance would be compared to the trained group (known as the experimental group) of sales reps (Cohen, 2005;
Kirkpatrick, 1959; Kirkpatrick, 2009; Phillips, 2003; Piskurich, 2010).
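Kirkpatrick's control-group recommendation can be illustrated with a rough difference-in-differences calculation in the spirit of Cohen's sales example. All figures below are hypothetical, and a real study would also test whether the difference is statistically significant.

```python
from statistics import mean

# Hypothetical monthly sales per rep before and after the training period.
experimental_pre  = [100, 95, 110]   # reps who received the training
experimental_post = [130, 120, 140]
control_pre       = [105, 98, 112]   # comparable reps who did not
control_post      = [115, 105, 120]

def isolated_training_effect(exp_pre, exp_post, ctl_pre, ctl_post):
    """Estimate the training's unique effect by subtracting the control
    group's change (economy, seasonality, etc.) from the trained group's change."""
    exp_change = mean(exp_post) - mean(exp_pre)
    ctl_change = mean(ctl_post) - mean(ctl_pre)
    return exp_change - ctl_change

effect = isolated_training_effect(experimental_pre, experimental_post,
                                  control_pre, control_post)
print(effect)
```

Here both groups improved, but only the portion of the experimental group's gain beyond the control group's gain is attributed to the training.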
Level 4 outcomes in particular may be difficult to isolate to the training program. This is
because in order to assess any of the level 4 outcomes, more time must elapse to make a complete assessment. For example, an organization might have to wait 2 or 3 fiscal quarters to see
if decreased turnover or higher productivity follow training on those topics. As a result, by the
time of assessment, other factors may have had a chance to affect the level 4 outcomes. This is
what Sanders, Cogin, and Bainbridge (2013) called a confounding variable, or another factor that obscures the effects or the impact of the training (Guerra-López, 2012). In sum, not
unlike a 7-day weather forecast, a level 4 evaluation—although still valuable data—is usually
more difficult to credit to the original training because it is the most removed from the training event (Johnson & Christensen, 2010; Kirkpatrick, 2009; Sonnentag, 2003).
Linking Kirkpatrick Outcome Levels to the Performance Formula
Remember that in Chapter 2, we broke down workplace performance by understanding what
components make up job performance; specifically, performance is a function of three variables:
• Ability—the employee’s capacity to perform the job; collectively, their KSAs
• Motivation—the employee’s willingness to perform the job voluntarily
• Environment—anything within the organizational environment (such as the supervisor, systems, and coworkers) that would affect the employee’s job performance
The Performance Formula
Performance = f(KSAs × M × E)
KSAs = Ability; M = Motivation; E = Environment
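One illustrative way to read the formula is as a multiplicative model, in which a zero in any factor zeroes out performance entirely. The 0-to-1 scoring below is an assumption for illustration only, not a measurement scale from the text.

```python
def performance(ksa, motivation, environment):
    """A multiplicative reading of P = f(KSAs x M x E): each factor is
    scored 0-1 (hypothetical scale); a zero anywhere yields zero performance."""
    return ksa * motivation * environment

# A trainee with strong KSAs but no motivation still performs at zero:
print(performance(0.9, 0.0, 0.8))  # 0.0
print(performance(0.9, 0.7, 0.8))
```

This multiplicative reading captures why level 2 learning alone (the KSA term) cannot guarantee level 3 behavior: motivation and environment must also be nonzero.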
Using Kirkpatrick’s taxonomy (see Figure 7.5), we can see where summative outcomes are
expressed within employee performance (Blanchard & Thacker, 2010; Mitchell, 1982).
Figure 7.5: Synthesizing Kirkpatrick and the performance formula
By synthesizing Kirkpatrick and the performance formula, we can illustrate a training’s
impact not only on employee performance, but also on organizational performance in total.
[Diagram: prior state, Performance = f(KSAs × M × E), with level 2 (learning) mapped to the KSAs term and level 1 (reactions) to motivation; level 3 (transfer) as posttrained performance on the job; and the summation of all trainees' level 3 performance shaping the future state of the environment and level 4 results.]
As Figure 7.5 shows, posttraining employee performance (level 3) is dependent on the effectiveness of both levels 1 and 2, reaction and learning. Specifically, the newly learned knowledge and skills are in level 2, learning, and the attitudes and motivation toward the new
learning are in level 1, reaction. Importantly, posttrained performance is both contingent on the organizational environment and subsequently affects level 4 outcomes. Specifically, posttrained employee performance is subject to the antecedent state of the organizational environment (for example, the quality and state of departmental supervision would affect
the efficacy of the posttraining employee performance). However, it is also expected that the
collective performance from the posttrained employee base would ultimately influence and
affect the future state of the organizational environment and organizational outcomes and
show itself in level 4 outcomes such as improved customer service, more efficient systems,
and reduced error rates.
HRD in Practice: The Case of the $25,000 Hello
Adam did a double take at the final invoice the consultants had faxed in.
$7,000—the bold digits jumped out at him. Adding this invoice to their first two invoices, the
total for the customer service training was now close to $25,000.
Man, this training was expensive! Adam thought.
It had all started because the receptionist had greeted a caller with a dry hello instead of
giving a pleasant greeting and introducing herself, he remembered. They had had a few
customer complaints about the receptionist’s lack of pleasantness, but unfortunately, on this
day the caller was the owner, Mr. Lager. “What kind of message of customer service are we
sending to folks, Adam?” Lager had asked. “I want those receptionists to make the callers feel
like we are a likeable and friendly company. Take care of it, and ASAP!”
Since Adam was in charge of administration, he contracted a customer service training firm
immediately. And it seemed to be good training, too. It had spanned 2 months, and all the
employees who dealt with customers were required to take it. Adam received reports that
the trainers were very good; the sessions were said to be fun and informative. The trainers
made sure the trainees learned new techniques about providing excellent customer service
by requiring each attendee to pass a customer service test. All the trainees had earned a
certificate to demonstrate the new learning.
In fact, now, after the training, anyone who called into the company heard a pleasant and
happy greeting: “Hello, So-and-So speaking. How can I help you?”
But, $25,000? Was it worth the expense? Adam pondered. Would this be considered a
questionable return on the company’s training investment?
Consider This
1. What types of financial data could Adam review to establish the monetary benefits of the
training to support the $25,000 expense and a positive return on the training investment?
2. What could Adam point to as proof of successful level 1 evaluation?
3. Success in Kirkpatrick’s level 3 is demonstrated in which part of the case?
7.4 Return on Investment
As the case of the $25,000 hello illustrates, not only do we want new learning to be applied
to the workplace and to impact organizational performance, we also want to do that in the
most cost-effective and efficient way. Summative evaluation should, in the end, lead to judgments on the value and worthiness of a training program; therefore, we also evaluate the cost
benefit of a training program and evaluate return on training investment, the so-called level
5. What Donald Kirkpatrick was to levels 1 to 4, Jack Phillips is to level 5.
Phillips is an internationally renowned expert on measuring the return on investment of
human resource development activities. Over the past 20 years, Phillips has produced more
than 30 books on the subject of ROI and has been a leading figure in the debate about the
future role of human resources (Noe, 2012; Phillips, 2003; Piskurich, 2010). ROI, or level 5,
evaluates the benefits of the training versus the costs. Specifically, at this level we compare
the monetary benefits from the program with the costs to conduct the training program (Noe,
2012; Phillips, 2003; Piskurich, 2010; Russ-Eft & Preskill, 2009).
According to Phillips (2003), the ROI measurement must be simple, and the process must be
designed with a number of features in mind. The ROI process must:
• be simple,
• be economical to implement,
• be theoretically sound without being overly complex,
• account for other factors that can influence the measured outcomes after training,
• be appropriate in the context of other HRD programs,
• be flexible enough to be applied in pre- and posttraining,
• be applicable to all types of data collected, and
• include the costs of the training and measurement program.
The two common ways to express training’s return on investment are a benefit–cost ratio
(BCR) and a return on investment (ROI) percentage. To find the BCR, we divide the total
dollar value of the benefits by the cost, as shown in the following formula:
BCR = (Total Dollar Value of Benefits) ÷ (Cost of Training)
We determine ROI percentages by subtracting the costs from the total dollar value of the benefits to produce the dollar value of the net benefits; these are then divided by the costs and
multiplied by 100 to develop a percentage:
Total Dollar Benefits – Costs of Training = Net Benefits
Net Benefits ÷ Costs × 100 = ROI
So, for example, if a traditionally delivered training program produced total benefits of
$221,600 with a training cost of $48,200, the BCR would be 4.6. That is, for every dollar
invested, $4.60 in benefits is returned. The ROI, therefore, would be 360%. According to
research conducted by SyberWorks, because e-learning alleviates the need for trainee and
trainer travel, e-learning has ROIs that regularly outperform traditionally delivered training
(Boggs, 2014).
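The worked example above can be checked with a short calculation; the functions simply restate the BCR and ROI formulas from the text.

```python
def benefit_cost_ratio(total_benefits, training_cost):
    """BCR = total dollar value of benefits / cost of training."""
    return total_benefits / training_cost

def roi_percent(total_benefits, training_cost):
    """ROI % = (net benefits / costs) x 100."""
    net_benefits = total_benefits - training_cost
    return net_benefits / training_cost * 100

# The chapter's worked example: $221,600 in benefits, $48,200 in costs.
bcr = benefit_cost_ratio(221_600, 48_200)
roi = roi_percent(221_600, 48_200)
print(round(bcr, 1), round(roi))  # prints 4.6 360
```

Note the relationship between the two expressions: ROI % is simply (BCR − 1) × 100, which is why a 4.6 BCR corresponds to a 360% ROI.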
Did You Know? Training ROI
Not all return on investment is created equal! Depending on the industry and/or type of
training, the ROI (measured by the BCR) will vary by sector, as shown in Table 7.1. As a
result, it is difficult to formulate a rule of thumb about what an appropriate or fair ROI
should be for a given training intervention. ROI will necessarily differ from organization to
organization, based on variables such as required financial margins, stakeholder preferences,
organizational culture, and overall corporate mission. In sum, and according to training ROI
guru Jack Phillips, ROI sometimes is simply used qualitatively, just to see if a program is
working or not.
Table 7.1 Examples of benefit–cost ratio per industry

Industry            Training program          BCR
Bottle company      Management development    15:1
Commercial bank     Sales training            12:1
Electric utility    Customer service           5:1
Oil company         Soft skills                5:1
Health care firm    Team training             14:1
Source: Based on Phillips, J. J. (2003). Return on investment in training and performance improvement programs. Oxford, England: Butterworth-Heinemann.
In context, the significance of ROI—and training itself—means different things to different
people; that is, different constituencies have different perceptions of ROI evaluation. For
example, a board of directors may see a big picture of how the training affects the company’s
ability to achieve its corporate goals; the finance department may be looking to see how
training stacks up financially against other ways to invest the company’s money; the department manager may be solely concerned with the impact on performance and productivity in
achieving department goals; and the training and development manager may be concerned
with how training programs affect the credibility and status of the company’s training function (Hewlett-Packard, 2004; Phillips & Phillips, 2012; Russ-Eft & Preskill, 2009).
While ROI is seen as beneficial, determining ROI can be a time-consuming endeavor; in fact,
for that reason, Phillips (2003) asserts that evaluating the ROI of a learning event is not appropriate in every situation. Specifically, Phillips and Phillips (2012) suggest that calculating ROI
does not add value in the following situations:
• If activities are very short, it is unlikely that any significant change in behavior will
have resulted.
• If activities are required by legislation or regulation, evaluators will have little power
to initiate changes because of their findings.
• If activities are used to provide learners with the basic technical know-how to perform their role, ROI data will be meaningless; Phillips argues that evaluating to
level 3 is more appropriate in these situations because the training is not optional.
Hard Data Versus Soft Data
Part of the overall challenge in computing returns on investment in training concerns how
we determine costs and benefits with regard to tangible and intangible data. For example,
intangible or indirect training benefits such as customer satisfaction, improved work relationships, and organizational morale are more difficult to put a dollar amount on than are
tangible or direct benefits such as lower turnover, fewer workplace injuries, and decreased
workers’ compensation premium costs. Training costs, too, can be direct or indirect. Direct
costs include all expenses related to facilitating the training; examples are the cost of hiring
a consultant, conference room fees, equipment rental, and employee travel costs (Piskurich,
2010). Indirect costs of training may include such personnel expenses as salary costs and the
costs of lost sales while employees are at training (Piskurich, 2010).
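Computing a training's fully loaded cost means tallying both categories. A short sketch, with hypothetical line items and dollar amounts following the direct/indirect cost categories above:

```python
# Hypothetical line items; the categories follow the direct/indirect
# distinction in the text, but every amount here is invented for illustration.
direct_costs = {
    "consultant fee": 12_000,
    "conference room fees": 2_000,
    "equipment rental": 1_500,
    "employee travel": 4_500,
}
indirect_costs = {
    "trainee salaries while in training": 9_000,
    "estimated lost sales during training": 6_000,
}

fully_loaded_cost = sum(direct_costs.values()) + sum(indirect_costs.values())
print(fully_loaded_cost)  # 35000
```

Notice that the indirect items nearly match the direct ones in size; ignoring them would understate the cost side of any ROI calculation.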
Tangible and direct data are also easier to record and report. Training expense, for example, comes directly off an organization's income statement. Well-trained workers, although
an asset that serves as a good predictor of the tangible outcomes, are considered off-balance-sheet assets and are not as easily tracked in organizational accounting systems (Brimson,
2002; Weatherly, 2003).
Data Gathering Methods
We need data to compute ROI, and we can choose from a variety of data gathering methods.
As Figure 7.6 depicts, a review of data gathering methods shows that follow-up surveys of
participants, action planning—such as “asking participants to isolate the impact of the training” (Phillips & Phillips, 2012, p. 95)—performance records monitoring, and job observation
were the preferred data collection methods.
Figure 7.6: Data gathering methods
In gathering data to compute ROI, each method has its pros and cons. Methods vary from
surveys (the most popular method in a recent survey) to interviews and focus groups, which
are more complex and take more time.
[Figure 7.6 is a bar chart ranking nine data gathering approaches by the percentage of organizations that use them to a high or very high extent (on a 0–60% scale): follow-up surveys of participants, action planning, performance records monitoring, observation on the job, program follow-up sessions, follow-up surveys of participants' supervisors, interviews with participants, interviews with participants' supervisors, and follow-up focus groups.]
Source: American Society for Training & Development, 2013; Phillips & Phillips, 2012.
Each data gathering method has its own advantages and disadvantages, including trade-offs between the time and the cost of collecting the data, and some methods require a special skill set
(for example, knowing how to conduct a focus group). Additionally, each method yields soft
and/or hard cost–benefit data, which can make the subsequent analysis more complex.
As the next section discusses, for these and other reasons, evaluation is often
postponed or neglected outright.
7.5 Evaluation: Essential, but Often Neglected
Perhaps not surprisingly, many organizations neglect or overlook the higher levels of
evaluation. Some surveys show that only about 20% of organizations conduct a formal evaluation of training's effectiveness (ASTD, 2013; Brown & Gerhardt, 2002; Noe, 2012; Russ-Eft &
Preskill, 2009; Wang & Wilcox, 2006; Werner & DeSimone, 2011).
The reasons for not conducting training evaluation are varied. Russ-Eft and Preskill
(2009) researched the prevailing reasons why evaluation is not done more often within organizations; notably, they found that organizations often do not value evaluation
in general. This may stem from many factors, including a lack of in-house evaluation expertise,
fear of what the evaluation may reveal, and even the practical
rationale that no one has asked for it!
In the final analysis, neglecting evaluation is not only unprofessional, it may also be unethical (see the Food for Thought feature box titled "Application of Evaluation"). We will look
further into the ethics of training in Chapter 10.
Food for Thought: Application of Evaluation
Some organizations prioritize quality evaluations to maintain the integrity of the
business. For example, the American Evaluation Association (http://www.eval.org) includes
high-quality evaluation in its code of ethics value statements, framing it as part of
socially responsible evaluation practice. Specifically, the
association's value statements in the practice of evaluation are as follows:
• We value high quality, ethically defensible, culturally responsive evaluation practices
that lead to effective and humane organizations and ultimately to the enhancement of
the public good.
• We value high quality, ethically defensible, culturally responsive evaluation practices
that contribute to decision-making processes, program improvement, and policy
formulation.
• We value a global and international evaluation community and understanding of
evaluation practices.
• We value the continual development of evaluation professionals and the development
of evaluators from under-represented groups.
• We value inclusiveness and diversity, welcoming members at any point in their career,
from any context, and representing a range of thought and approaches.
• We value efficient, effective, responsive, transparent, and socially responsible
association operations. (American Evaluation Association, 2013)
Consider This
1. What does the American Evaluation Association mean by culturally responsive evaluation
practices?
2. How would ethical evaluation within an organization impact the public good?
Ethical obligations aside, at its core, evaluation's objective is not only to ascertain whether
an organization's training programs are effective but also, when they are not, to produce the data needed to hold those responsible for training accountable.
Sampling of Evaluation Models
Besides Kirkpatrick’s and Phillips’s, there are, of course, other evaluation models. However—
and perhaps not surprisingly—many of the evaluation models are variations on the same
themes. That is, evaluation models tend to assess the individual, process, and organizational
levels, as well as consider the environment or context in which the training takes place. Let us
look at some other popular evaluation models used.
Stufflebeam’s CIPP
The CIPP model of evaluation was developed by Daniel Stufflebeam and colleagues in the
1960s. CIPP is an acronym for the four aspects the model requires evaluators to judge in
assessing a program's value: context, input, process, and product.
CIPP is a decision-focused approach to evaluation; it emphasizes the systematic provision of
information for program management and operation. As shown in Table 7.2, the CIPP model
is an attempt to make evaluation directly relevant to the needs of decision makers during a
program’s different phases and activities.
Table 7.2: The CIPP model of evaluation

Aspect of evaluation | Type of decision       | Kind of question answered
Context evaluation   | Planning decisions     | What should we do?
Input evaluation     | Structuring decisions  | How should we do it?
Process evaluation   | Implementing decisions | Are we doing it as planned? And if not, why not?
Product evaluation   | Recycling decisions    | Did it work?
Source: Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation theory, models, and applications. New York: Wiley. Reprinted with permission.
Kaufman’s Five Levels of Evaluation
Roger Kaufman (1999) originally created a four-level assessment strategy called
the organizational elements model; a later modification added
a fifth level, which assesses how a performance improvement program contributes to the
good of society in general, as well as how it satisfies the client. Kaufman's evaluation levels are
shown in Table 7.3.
Evaluation: Essential, but Often Neglected
Chapter 7
Table 7.3: Kaufman’s five levels of evaluation
Level
Evaluation
Focus
5
Societal outcomes
Societal and client responsiveness, consequences and payoffs.
3
Application
Individual and small group (products) utilization within the organization.
4
2
1b
1a
Organizational output
Acquisition
Reaction
Enabling
Organizational contributions and payoffs.
Individual and small group mastery and competency.
Methods’, means’, and processes’ acceptability and efficiency.
Availability and quality of human, financial, and physical resources input.
Source: Kaufman, R. (1999). Mega Planning: Practical Tools for Organizational Success: SAGE Publications. Excerpted from p.6 Table 1.1 of Kaufman. R. (2008) The
Assessment Book, HRD Press. ISBN 9781599961286. Reprinted with permission.
CIRO: Context, Input, Reaction, and Outcome
The CIRO (context, input, reaction, and outcome) four-level approach was developed by Peter
Warr, Michael Bird, and Neil Rackham (Warr, Bird, & Rackham, 1971). Adopting the CIRO
approach to evaluation gives employers a model to follow when conducting training and
development assessments. Employers should conduct their evaluation in the following areas:
• C—Context or environment within which the training took place. Evaluation here
goes back to the reasons for the training or development event or strategy. Employers
should look at the methods used to decide on the original training or development
specification. Employers need to look at how the information was analyzed and how
the needs were identified.
• I—Inputs to the training event. Evaluation here looks at the planning and design
processes, which led to the selection of trainers, programs, employees, and materials.
Determining the appropriateness and accuracy of the inputs is crucial to the success
of the training or development initiative.
• R—Reactions to the training event. Evaluation methods here should be appropriate
to the nature of the training undertaken. Employers may want to measure the reaction from learners to the training and to assess the relevance of the training course to
the learner’s roles. Assessment might also look at the content and presentation of the
training event to evaluate its quality.
• O—Outcomes of the training event. Employers may want to measure the levels at
which the learning has been transferred to the workplace. This measurement is easier
when the training involves hard and specific skills—as would be the case for a train
driver or signal operator—but is harder for softer and less quantifiable competencies,
including behavioral skills. If performance is expected to change because of training,
then the evaluation needs to establish the learner’s initial performance level.
It is fair to say that, although many evaluation models vary around the same
themes, certain models may be more appropriate to use than others, depending on
the context and focus. For example, the Kirkpatrick and CIPP models focus on training evaluation but, unlike Phillips's model, do not emphasize evaluating the financial return on investment. Likewise, unlike other tactical evaluation models, Kaufman's model,
because of its focus on societal outcomes, is not limited to training initiatives and may be
used more broadly in other evaluative contexts such as consumer marketing or evaluating an
organization’s corporate citizenship efforts.
HRD in Practice: Back to the Case of the $25,000 Hello
When we last left Adam, he was pondering whether the $25,000 expense for the customer
service training was worth it. Adam wondered, “Would this be considered a questionable
return on the company’s training investment?” After performing a return on investment for
the training program, Adam realized that, in fact, the training was not cost effective, with a
–1.6% ROI. Tables 7.4 and 7.5 show some of Adam’s analysis, in which he found the benefits
of the training were $24,615 but the direct costs were $25,000:
Table 7.4: Adam’s ROI analysis
Task
Result
1. Focus on a unit of measure.
Reduction in number of complaints.
2. Determine a value of
each unit.
3. Calculate the change in
performance data.
4. Determine an annual
amount for the change.
5. Calculate the total value of
the improvement.
Take an average cost per complaint; include direct and indirect
costs—in this case $547.
Six months after the program, there were 50 fewer complaints,
with 30 of those directly attributed to supervisors as a result of
techniques taught in the training program.
It was decided an annual reduction of 45 complaints was conservative and realistic.
Total value of improvement attributable to training was 45 × $547
= $24,615.
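Adam's arithmetic can be reproduced in a few lines. A sketch using the Table 7.4 figures; note that only the $25,000 direct cost is itemized in the case, and any additional indirect costs would push the ROI further negative, toward the roughly –1.6% Adam reports:

```python
value_per_complaint = 547   # average cost per complaint, direct and indirect ($)
annual_reduction = 45       # conservative annual reduction in complaints

benefits = annual_reduction * value_per_complaint  # total value of improvement
direct_costs = 25_000                              # the training's direct cost ($)

net_benefit = benefits - direct_costs              # negative: costs exceed benefits
roi_pct = net_benefit / direct_costs * 100

print(benefits)           # 24615
print(net_benefit)        # -385
print(round(roi_pct, 1))  # -1.5
```

With a benefit–cost ratio just under 1:1 (0.98:1), the program fell $385 short of paying for itself in its first year.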
Table 7.5: Other organizations' training ROI that Adam researched

Study or setting | Target group | Program description | Business measures | ROI
Verizon Communications | Training staff, customer service | Customer service skills training | Reduced call escalations | (–85%)
U.S. Department of Veterans Affairs | Managers, supervisors | Leadership competencies | Cost, time savings, reduced staff requirements | 159%
Retail Merchandise Company | Sales associates | Retail sales skills | Increased sales revenues | 118%
Source: Phillips, J. J., & Phillips, P. P. (2006). The ROI fieldbook. Copyright © 2006 International Society for Performance Improvement. New York: Wiley.
Reprinted with permission of John Wiley and Sons.
During his research on training evaluation, Adam saw that the results could have been much
worse; in fact, he read that Verizon had a more extensive customer service training that had
an astounding –85% ROI! “Wow!” Adam uttered aloud, “Evaluation cannot be overlooked!”
Consider This
1. Adam agreed that the skills outcome for the customer service training was a success.
Specifically, after the training, anyone who called the company heard a pleasant and
happy greeting: "Hello, So-and-so speaking. How can I help you?" In the final analysis,
does it really matter if the ROI was –1.6%?
2. What measures could Adam have taken to ensure a positive ROI?
3. Do you think the training company that Adam contracted had an ethical obligation to
ensure a positive ROI? Specifically, could they have charged less and gotten the same result?
Summary and Resources
Chapter Summary
• The focus of formative evaluation is the process, as the training is
forming; summative evaluation, by contrast, focuses on the outcomes and specific training results—both the learning and the performance.
• For summative evaluation, we used Kirkpatrick’s four-level taxonomy, which is
depicted as a pyramid showing the four stages of evaluation: reaction, learning,
behavior, and results.
• The chapter also discussed return on investment, sometimes known as level 5. With
ROI, we can check to see how cost-effective and efficient the training program is,
which in turn can lead to judgments on the value of training. A particular challenge in
computing returns on investment in training concerns tangible data versus intangible
data, also known as hard versus soft data.
• Finally, we discussed why organizations often neglect evaluation. The number one
reason is that organization members do not value evaluation. In sum, neglecting training evaluation may be not only unprofessional, but also unethical.
Posttest
1. Summative evaluation of training assesses both
a. learning-based and performance-based outcomes
b. training processes and training outcomes
c. readability and usability of the training materials
d. beta testing and pilot testing results
2. __________ occur(s) when employees apply new information learned in training
to their jobs.
a. Summative evaluation
b. Accountability
c. Transfer of training
d. Organizational results
3. Trainees who gain a new attitude after diversity training have achieved which type of
learning outcome?
a. a cognitive outcome
b. a psychomotor outcome
c. an affective outcome
d. a performance-based outcome
4. A trainee who learned from a training but does not demonstrate any resulting change
in knowledge, skills, or attitudes is exhibiting __________.
a. negative transfer
b. passive transfer
c. null transfer
d. zero transfer
5. Which possible outcomes of training, in Kirkpatrick’s model, are the hardest to isolate
to a particular training program?
a. level 1
b. level 2
c. level 3
d. level 4
6. A manager translates a safety training’s results into a dollar amount, determining
how much money has been saved by reducing workplace accidents. She next divides
this amount by the total amount the company paid to hold the training. Which calculation is the manager using?
a. a net benefit indicator
b. the benefit–cost ratio
c. the return on investment percentage
d. a monetization equation
7. Which of the following is considered an indirect or intangible benefit of training?
a. reduced job turnover
b. decreased injuries in the workplace
c. improved customer satisfaction
d. lower costs of workers’ compensation
8. The number one reason more organizations do NOT conduct formal evaluations of
trainings is that __________.
a. organization members do not believe evaluation is valuable
b. organization members lack understanding of the evaluation’s purpose
c. the costs of an evaluation outweigh the benefits
d. the organization has had previous negative experiences with evaluation
9. Which model of program evaluation looks at how a program not only satisfies a client
but also contributes to society?
a. Kirkpatrick’s four-level taxonomy
b. The CIPP model
c. Kaufman’s five levels of evaluation
d. The CIRO approach
10. Which evaluation model focuses on evaluation as an approach to decision making?
a. Kirkpatrick’s four-level taxonomy
b. The CIPP model
c. Kaufman’s five levels of evaluation
d. The CIRO approach
Assess Your Learning: Critical Reflection
1. Explain how formative evaluation is linked to summative evaluation in the training
evaluation process.
2. How dependent is level 1, reaction, on level 2, learning? How might a trainee learn
something from a training workshop he or she thought was awful?
3. Could you make a case for continuing with a training program that is yielding a negative ROI?
4. If a training program is found to have a positive ROI, does this measure indicate that
the training should be renewed? If not, why?
5. Describe some ethical problems that might occur if training evaluation is neglected.
6. As it relates to levels 2 and 3, learning and behavior, what is meant by the statement
“not everything learned is observable?”
Additional Resources
Web Resources
Jack Phillips’s ROI Institute: http://www.roiinstitute.net
The Bottom Line on ROI: The Jack Phillips Approach. Canadian Learning Journal, 7(1),
Spring 2003:
http://www.learning-designs.com/page_images/LDOArticleBottomLineonROI.pdf
Evaluation of Training Effectiveness: http://www.youtube.com/watch?v=5HqEfxz5YNU
For information on outcome evaluation:
http://www.tc.umn.edu/~rkrueger/evaluation_oe.html
For more on Kirkpatrick’s four levels of evaluation model:
http://www.businessballs.com/kirkpatricklearningevaluationmodel.htm
A government website on training and development policy:
http://www.opm.gov/wiki/training/Training-Evaluation.ashx
More information on how to measure training effectiveness:
http://www.sentricocompetencymanagement.com/page11405617.aspx
More on formative and summative evaluation:
http://www.nwlink.com/~donclark/hrd/isd/types_of_evaluations.html
More on ROI in Training and Development: http://www.shrm.org/education/hreducation/documents/09-0168%20kaminski%20roi%20tnd%20im_final.pdf and http://www.shrm.org/Education/hreducation/Pages/ReturnonInvestmentTrainingandDevelopment.aspx
Measuring ROI on learning and development:
http://www.astd.org/Publications/Books/Measuring-ROI
Further Reading
American Society for Training & Development. (2013). State of the industry report. Alexandria, VA: ASTD.
Boggs, D. (2014). E-learning benefits and ROI comparison of e-learning vs. traditional training. Retrieved from SyberWorks website: http://www.syberworks.com/articles/elearningROI.htm
Clark, D. (2013). Introduction to instructional system design. Retrieved from Big Dog & Little Dog's Performance Juxtaposition website: http://www.nwlink.com/~donclark/hrd/sat1.html
Kirkpatrick, D. L. (2009). Evaluating training programs: The four levels. Berrett-Koehler.
Phillips, J. J., & Phillips, P. P. (2012). Proving the value of HR: How and why to measure ROI.
Alexandria, VA: Society for Human Resource Management.
Piskurich, G. M. (2010). Rapid training development: Developing training courses fast and
right. New York: Wiley.
US Department of Health and Human Services. (2013). Tips and recommendations for successfully pilot testing your program. Retrieved from http://www.hhs.gov/ash/oah/oah-initiatives/teen_pregnancy/training/tip_sheets/pilot-testing-508.pdf
Answers and Rejoinders to Chapter Pretest
1. true. Formative evaluation can be seen as a “try it and fix it” process, since it takes
place while the training is still being developed. Ideally, any deficiencies are uncovered before the program is offered to an external audience.
2. false. Although trainees’ feedback can reveal a lot about trainers’ strengths and weaknesses, this is not typically the main reason for evaluating whether trainees found a
session interesting and useful. More significantly, employees’ satisfaction with a training session predicts how much they learn from it.
3. false. Although useful in many situations, return on investment is time-consuming to
calculate and is NOT valuable in all situations. For example, trainings that are very
short, are required by legislation, or are necessary for learners to gain basic skills for
their roles will not benefit from having return on investment calculated.
4. true. According to some surveys, only about 20% of organizations conduct formal
evaluations of the effectiveness of their trainings, despite the fact that many experts
consider this unprofessional.
5. true. The American Evaluation Association holds that high-quality evaluation is
an essential part of organizations’ social responsibility. High-quality evaluation is
included in the association’s code of ethics for organizations.
Answers and Rejoinders to Chapter Posttest
1. a. Summative evaluation looks at both the short-term learning-based outcomes and
the long-term performance-based outcomes of a training. Learning-based outcomes
include employees’ assessments of whether or not they learned anything, whereas
performance-based outcomes address how the training influenced the employees’
behavior or the organization’s return on investment.
2. c. Transfer of training describes the extent to which trainees apply what they learned
in the training to the workplace, transferring their new learning. It is a longer-term,
performance-based outcome measured in summative evaluation.
3. c. As opposed to cognitive outcomes, which link to fact- or procedure-based knowledge, and psychomotor outcomes, which link to skills, affective learning outcomes
describe changes in attitude as a result of the new learning.
4. d. In zero transfer of training, evaluations show that learning has occurred, but no
changes in KSAs are observed. This tends to occur when a trainee is able to apply
learning but is not willing to apply it.
5. d. Level 4 outcomes include improvements to the overall organization’s functioning
and bottom line. These are particularly difficult to isolate or attribute to a training,
because time must elapse before they can be evaluated. During that time, other factors may have influenced the overall organization, making it hard to know whether
the training was responsible for the results.
6. b. The benefit–cost ratio (BCR) divides the total dollar value of a training’s benefits by
the cost of the training. BCR is one common way of expressing a training’s return on
investment.
7. c. Lower job turnover, reduced workers’ compensation premiums, and decreased
workplace injuries are all examples of direct or tangible benefits. Improved customer
satisfaction, on the other hand, is considered an indirect or intangible benefit, along
with others such as improved work relationships and organizational morale. It is
harder to assign a dollar amount to these intangible benefits.
8. a. The most common reason organizations do not conduct evaluations is that organization members do not value evaluation. The explanations for this are varied and may
include a lack of understanding of the benefits evaluation can bring and of how evaluation is used.
9. c. Kaufman’s original four-level organizational elements model was modified to add
a fifth level that addresses societal outcomes. This level looks at how a performance
improvement program benefits clients and society as a whole.
10. b. The CIPP model is a decision-focused approach that attempts to directly relate
evaluation to the needs of program decision makers. It emphasizes systematically
providing the information needed for program operation and management.
Key Terms
accountability The willingness to accept
responsibility or to account for one’s actions.
achievement tests Tests designed to measure the degree of learning that has taken
place.
affective outcomes Attitudes; focuses on
changes in attitudes as a function of the new
learning.
antecedent state The organizational
environment prior to the training, on which
posttrained performance depends; for
example, how effective and efficient the
performance is.
cognitive outcomes Knowledge; outcomes
that show the degree to which trainees
acquired new knowledge, such as principles,
facts, techniques, procedures or processes.
confounding variable Any factor that
obscures the effects or the impact of the
training.
control group A group used in order to
statistically manage and separate the impact
of other variables so that the unique effect of
the training intervention can be assessed.
cost benefit The relationship between the
cost of an action and the value of the results.
experimental group A group of subjects
exposed to an experimental study.
four-level training evaluation taxonomy A theory developed by Donald
Kirkpatrick and used to determine the
effectiveness of the training and development process, depicting both the short-term
learning outcomes and the long-term performance outcomes at four levels: reaction,
learning, transfer, and results.
future state The posttraining organizational environment, on which performance
from a well-trained employee base has an
effect; for example, a more effective and
efficient environment.
isolation Isolating any subsequent performance improvement to the training itself.
learning outcomes Results that are established during the analysis and design phases
of ADDIE; cognitive outcomes (knowledge),
psychomotor outcomes (skills), and affective
outcomes (attitudes).
negative transfer A transfer demonstrated
when KSAs are at less-than-pretraining
levels.
organizational results Outcomes or
results that contribute to the functioning of
an organization, such as business profits.
performance drivers Changes in financial
outcomes or other metrics that should indirectly affect financial outcomes in the future.
performance tests Tests that require the
trainee to create a product or demonstrate a
process.
positive transfer A transfer demonstrated
when positive changes in KSAs are observed.
posttest A test administered after a program to assess the level of a learner’s knowledge or skill.
psychomotor outcomes Skills; assessment is based on the level of new skills as
a function of the new learning, as seen, for
example, in newly learned listening skills,
conflict-handling skills, or new motor or
manual skills.
questionnaires A set of evaluation questions asked of participants, who give their
ratings for various items (for example,
Strongly Agree, Agree, Neutral, Disagree, or
Strongly Disagree); or open-ended items
that allow participants to respond to any
changed attitudes in their own words (for
example, “How do you feel about diversity in
the workplace?”).
reaction The first level in Kirkpatrick's four-level training evaluation, in which the evaluation assesses whether the trainees liked the training session per se; it is also a good predictor of the effectiveness of the next two levels of evaluation.

return on investment (ROI) percentage A percentage calculated by subtracting the costs from the total dollar value of the benefits to produce the dollar value of the net benefits, and then dividing this amount by the costs and multiplying the result by 100 to produce a percentage.

return on training investment (ROI) An analysis that evaluates the cost benefit of a training program via evaluation of the benefits of the training versus the costs; sometimes called level 5 on top of Kirkpatrick's four-level training evaluation taxonomy.

transfer of training An evaluation that assesses whether the participants of the training program applied their new learning from the training setting to the workplace; the ability of trainees to apply to the job the knowledge and skills they gain in training.

zero transfer A transfer of training that is demonstrated if learning occurs but no changes are observed in KSAs.