CWL Publishing Enterprises




From Training to Performance:
Pre- and Post-Training Activities to Increase Transfer
Timm J. Esque

This original article from The 1997 ASTD Training and Performance Yearbook by consulting editor Timm Esque provides a practical and complete overview of how to actually take actions that help ensure that people (1) use what they learn in training and (2) improve their performance as a result. The ideas come from Esque's experience as an instructional designer and performance consultant at Intel, one of America's most admired companies. It will be immediately evident that his points make sense for helping any company enhance the transfer of training to performance on the job. This approach helps move trainers, as Esque points out, from training provider to business partner.

* * *

Virtually all training interventions involve some sort of activity before the training is developed and after it is delivered. Needs assessment, pre- and post-testing, level 3 and 4 evaluation (per Kirkpatrick), development planning, post-training coaching, and use of incentives are just some of the activities that quickly come to mind. It is interesting that, when you look at the way organizations account for their training expenditures, it is often difficult to discern any evidence of these activities. Those companies that make the effort to account for company-wide training costs tend to have accounting categories such as internal training and external training or training development and training delivery. This suggests that the training function in these companies, and presumably many others, is focused almost completely on the development and delivery of training.

One of the key goals of training in organizations is to transfer acquired knowledge and skills to the job, to change behavior, and ultimately to improve individual and organizational performance. According to Tom Gilbert, the father of performance technology, performance is a function of both behavior and the outputs of those behaviors (e.g., product designs, finished products, sales, and satisfied customers). It is the outputs that typically add value to a company's products and services (Gilbert 1996). Training can be developed and delivered, but if it's not transferred to the job in the form of specific behaviors and outputs, it is simply an overhead cost.

The literature on transfer of training suggests that only a very small percentage of training is ever transferred to the job (Newstrom 1985). Training that is not transferred cannot improve performance. At Intel, we are finding that pre- and post-training activities can have tremendous leverage for transferring trained knowledge and skills to the job. When the goal of training is transfer, pre- and post-training activities should receive at least as much attention as the development and delivery of training.

Some of the training organizations at Intel have been increasing the attention they give pre- and post-training activities in an effort to increase transfer of training. In this article, I will describe several pre- and post-training activities aimed at increasing transfer and at improving the impact of training on performance. This is not a comprehensive review of pre- and post-training activities or of tactics for increasing transfer. However, for training organizations currently focusing primarily on training development and delivery, this article should provide a starting place for the effort to improve both transfer of training and training's impact on organizational performance.

I'll address five different activities: two pre-training and three post-training. Then I'll describe their application at Intel and give some general advice about implementation. Some of these activities serve other purposes in addition to increasing transfer, but the article focuses on how these activities can improve transfer. Let's examine pre-training activities first.

Pre-Training Activities to Improve Transfer

1. Needs Assessment
Training has evolved from an art to a profession over the last 20 years. When I joined Intel in 1983, formal education in instructional design was rare among training personnel; now the majority seem to have formal education and/or training in it. One basic tenet of instructional systems design (ISD) is to begin with a needs assessment. Needs assessments vary in method and scope, but the basic intention is to clarify the current and desired situations. What is the state of knowledge, behavior, or performance now, and what is the desired state? Numerous methods have evolved for use in different environments, and the best are more or less equally valid for making this determination. However, not all methods of needs assessment have the same impact on the transfer of training.

One factor that influences whether or not training will be transferred to the workplace is the relevance of the training to the trainees. Unfortunately, certain approaches to needs assessment can actually reduce the relevance of training. For example, needs assessment often begins with job analysis, in an attempt to capture a comprehensive picture of a job and break it down into logical pieces. To create a model of the job that is relevant to all incumbents over a long period of time, analysts often end up with a model that is largely out of context with the desired performance. In other words, the job is described as a laundry list of required knowledge, skills, and behaviors, without being linked to desired performance outcomes.

When training needs are analyzed in this fashion, the resulting training is in one sense relevant to anyone in a particular job position, but in another sense, not particularly relevant to anyone in that position at any particular time. From the perspective of trainers and of management, it must be relevant, because it is derived from the people doing the job. However, from the perspective of the people doing the job, only bits and pieces are relevant to their current desired performance, and there is often little guidance given as to which bits are relevant and when.

We can shed light on this apparent paradox by viewing it from another vantage point. What would a needs assessment look like that defines training needs relevant to a specific performer trying to achieve specific outcomes? Tom Gilbert developed a needs assessment (or front-end analysis) method that does just that. It is called the Exemplary Performance Improvement Chart, or EPIC.

In a nutshell, the EPIC approach begins by defining desired performance (valued outcomes or accomplishments) and then working with the performers to understand current relevant barriers to producing those outputs. At Intel, the method has mostly been used to understand barriers to rapid product development in the product divisions and, where possible, to remove those barriers. This has been accomplished by first documenting the outcomes of product development (those things we actually deliver to our customers) and then working with members of the product development team to understand the key sub-outputs they are responsible for and the barriers that have prevented them from producing those sub-outputs successfully in the past.

This analysis was initially done once for a whole product division, with the assumption that the same barriers apply to every product development team in the division. However, in this age of rapid change, it has proven more useful to conduct this analysis at the beginning of each new development project to ensure relevance. The EPIC method has since been adapted into a one-day project kickoff meeting Intel calls Accomplishment-Based Project Planning (or Map Day). Of course, not all the barriers that get identified in Map Day can be removed through training. Some example barriers identified in one specific analysis included:

  • Unwillingness of management to trade-off product features in order to meet the desired schedule
  • Insufficient resources (e.g. 10 developers, but only six development systems)
  • Lack of design engineer understanding of the validation model
Only the last barrier required training for product development team members. This training needed to be just in time, because the validation technology changes rapidly. The first barrier also required training, but for management rather than for the development team members. Management was generally aware that schedule, scope, and resources cannot all be constrained on a project. But management had developed a habit of trying to constrain all three, and it required some training (and some painfully poor performance) to get management to exhibit the appropriate "trade-off" behavior.

The transfer of training in this case was excellent. All the participants in the training knew exactly why the training was being provided, and it was delivered in the context of successful performance on their current project. Training will transfer when it is provided to those people who have decided they need it -- just when they need it -- to be successful. But this requires the training developer to approach needs assessment from a different perspective. Gilbert's EPIC method is designed from the perspective of successful performance.

Probably the most important lesson we learned about implementing Gilbert's EPIC method is that his principles of analysis don't need to be followed as rigorously as the written descriptions of the method suggest. In fact, when Gilbert consulted at Intel in 1993, he only generally followed his own prescribed method. He also let the performers do most of the analysis themselves. He got them started and then provided some coaching on the method, but he did not take the role of analyst, feeding back to them what he had discovered they needed. These subtleties of implementing EPIC are described in detail in another article (Esque 1995).

2. Contracting
It is easy for me to believe that the vast majority of training resources are in fact invested in development and delivery of training. The accepted role of training for many years has been to provide training, not necessarily to improve performance. You can tell that industry's top management accepts this role for training by the way they talk about the effectiveness of their training function. It is typical to hear CEOs quote how many millions of dollars their companies spend annually on training. These statements are rarely coupled with any comment on the return for these large training investments. A large training budget is viewed as the sign of a progressive company.

Historically, the training function has largely accepted the role of training provider. The role seems beneficial to everyone involved until the need arises for cost-cutting. All too often, that huge training investment suddenly starts to look like a relatively easy place to reduce costs. In good times there is no expectation for training to demonstrate a return on investment and, accordingly, in bad times it is nearly impossible for the training function to justify its costs.

In my opinion, an even worse danger of playing the training provider role is present in both good times and bad. When client organizations view training as simply a training provider, they also tend to view training as a panacea for all performance problems. When this mindset is present, the training function is often called in to implement training fixes for the client organizations, which then exempt themselves from responsibility for behavior changes and performance improvement. If, as I have been arguing, performance improvement depends on activities that occur before, during, and after training, then responsibility for performance improvement cannot just be delegated to the training function. An important question becomes: how can training get the client organization to view trainers as professionals who can assist in improving business performance, rather than simply as training providers?

One technique we've used at Intel to begin the move from training provider to business partner is contracting. With the current trend toward outsourced services, some training and performance professionals are actually billing back for their training and related services. In these cases, contracting is practically a requirement. But even outside of the bill-back situation, contracting can be an effective pre-training tool.

Training and performance improvement contracts can help achieve two primary objectives. The first is to clarify expectations of the service that will be provided and to define how the value or quality of those services will be assessed. This is an opportunity to begin talking with the client organization about what results it expects from the training services rendered. We need to find out what sorts of behavior or output changes the clients are looking for and how they will determine if these changes occurred or not.

A second useful objective of contracting is to clarify what responsibility the client maintains for ensuring that the changes in behavior and/or outputs happen. It is significantly easier to influence the client organization to do their part in supporting performance improvement if the client has already approved a contract that states clearly what the organization's responsibilities are. Some examples of client responsibilities might include the following:
  • Set the expectation that training participants should return from the training with a written action plan for implementing what they have learned (and then review these plans immediately after training, and again after enough time has passed to implement the action plan).
  • Send participants in groups of intact work teams, so that they can reinforce each other for using what they learned after training.
  • Hold follow-up "application sessions" in which training participants are encouraged to share with their peers which aspects of the training they have utilized and what lessons they learned.
Informal contracting for training and related services can and should remain a brief and simple process. The best informal contracts take the training professional less than an hour to draft and the client organization just a couple of minutes to review. Contracts are typically one page, with plenty of white space. The idea is to begin to involve the client organization in the transfer process, not to create a bureaucracy.

It is important not to get overzealous about the performance results expected from training. Keep initial expectations about performance improvement conservative, and keep performance measures very simple (use existing performance indicators whenever possible). Remember, if the training function has had the role of training provider, the client probably hasn't really had any expectations about measurable performance improvement. It is much easier to build on modest success than to rebound from major disappointment.

The example client responsibilities provided above suggest that post-training activities are largely the responsibility of the client organization. It is significantly easier to achieve transfer of training when the client organization develops an expectation that training should result in measurable performance improvement. As training professionals, however, we cannot just wait for the client organization to develop these expectations. There are several post-training activities that we can implement to directly impact the transfer of training. Let's now review some of these post-training activities.

Post-Training Activities for Increasing Transfer

3. Measurement
What gets measured is what improves. This maxim is taught at the best business schools but is not always taken to heart by training and performance professionals. When the training function does engage in the measurement of trained behaviors and outcomes, it too often does its measuring immediately after training. This measurement may help improve the content or delivery of the training to the next participants, but it has very little effect on the transfer of training to the work environment.

Kirkpatrick's ubiquitous model of the four levels of evaluation calls for two levels to occur well after training has occurred. Level three specifically evaluates the transfer of training, and level four evaluates whether transferred knowledge, behaviors, or outcomes actually impact organizational performance (Kirkpatrick 1983). But many training functions are still not implementing levels three and four in most cases. It is possible that this is also related to the role of the training function as training provider. But if, in fact, what is measured is what gets improved, there is tremendous leverage for improving transfer of training by simply measuring whether or not training has transferred.

Of course, the act of measurement doesn't really cause anything to improve, unless the results of that measurement are communicated. Where is all this measurement data supposed to go? In my experience, this data is often underutilized. If the data is fed back to anyone, it is usually to upper management. Unfortunately, the response from management can be less than encouraging. After all, if training's role has always been to provide training, it is also a new role for management to respond to transfer-of-training data. Whether or not management is impressed with transfer-of-training data, the real potential for influencing transfer may lie in feeding back the transfer results to the trainees themselves.

At Intel, we have experimented with this tactic, and the preliminary results are very encouraging. During the training, the participants are shown historical transfer-of-training data, usually on a session-by-session basis. In other words, they are shown what percentage of participants from previous training sessions have actually implemented the lesson being taught. Participants are also told at this time that transfer will be measured for this session and that they will all be informed several weeks after training:

  • How this session performed as a group in comparison with previous sessions.
  • Which individuals from this session actually implemented the tool on the job.
This tactic works best when the lesson being taught is a specific method or tool that is relatively easy to measure. Intel has used an e-mail survey of all training participants to find out who implemented a lesson back on the job. Trainees who respond affirmatively get a follow-up call to verify usage of the lesson and to discover any lessons learned by the trainee that can be injected back into the training. It can be a very satisfying task to follow up with the trainees who have implemented the lesson. People love to talk about their own successes. In addition to influencing future transfer, the training function can gain a lot of credibility with its clients by performing follow-up activities.

There is plenty of literature on post-training measurement, but little if any focuses on using the data to influence the post-training behavior of training participants directly. Feeding back level four evaluation results to training participants can have as much influence on future transfer as feeding back level three results, or more. Feeding back level four results will be the topic of the next section, "Networking," because the sharing of level four results directly among past trainees is essentially a form of networking. Intel has found networking to be a powerful post-training tool for increasing transfer of training.

4. Networking
When training professionals get together at an ASTD meeting, there is often time planned before, after, or in between the formal programming for something called "networking." Historically, networking has been viewed as part of professional development, as opposed to serious work. But increasingly, networking is being viewed as an important component of organizational effectiveness (Lipnack and Stamps 1993). In today's high-speed environment, where everyone in the organization is connected to everyone else, relying on the formal hierarchy can sometimes reduce an organization's competitiveness. In certain cases, using networks to influence action and change is an appropriate shortcut around the formal chain of command.

Intel has begun experimenting with networks to increase transfer of training. As discussed in the previous section, a lot can be gained by simply feeding back evaluation data to the training participants. The impact on transfer of training is enhanced even more when the data fed back is level four data. In simple terms, what we feed back are success stories.

Much of the training that people (especially professionals) receive is conceptual in nature. We always provide the possible benefits of using these conceptual tools and, when we can, real examples of how the tool was successfully used by someone, somewhere. But neither of these are nearly as powerful as success stories from people in the same organization -- colleagues of the training participants. Success stories are brief descriptions of how and when the concepts were implemented, along with the impact they had on performance and the lessons learned. This becomes even more influential when there are numerous success stories, highlighting that lots of people are using these concepts right here in this organization and are being very successful in doing so.

One way to incorporate success stories back into training is to invite the successful users of the training as guest speakers at future training sessions. This serves several purposes. First, simply asking the successful users to tell their stories to their colleagues establishes them as experts, which to many people is a reward in itself. Second, the successful users are the best ones to answer detailed questions about implementing what is being taught and identify possible obstacles. Last but not least, Intel has found that getting successful users together at training sessions allows users to network with each other, which often results in cross-organizational sharing of lessons learned and a strengthening of commitment to the concepts being taught. Many successful users who have been given a chance to present their success stories and compare notes with other successful users end up returning to their job and influencing others in their organization to use the concepts, even before they attend the training!

In practice, Intel has used a combination of these two post-training activities in tandem. These practices are now spreading, but most of the early lessons learned are from the training of a specific management tool that all Intel managers are taught early in their management career. The tool had been taught at Intel for over five years before the post-training activities were added, but there was little evidence of transfer. After the first year using these post-training activities, over 50 individual success stories had been documented. Individual managers continue to implement the tool, and several organizations are now asking for assistance to implement the tool organization-wide.

Much of the credit for the success of this transfer case study goes to the individual managers who successfully implemented the tool. But it should be noted that the influence was generated completely by the actions of the training function and by the successful users. All this was accomplished without top management telling anyone that they were obligated to use the tool (although some top managers, having seen results in their organizations, are now obligating the new trainees to use it).

Here are some things you should remember when implementing these two post-training activities:

  • In the training, provide a business-oriented rationale for transferring the training to the job. Then explicitly state that you expect the training to be used on the job and describe how you will follow up to assess its use.
  • Follow up with everyone, but spend your energy on the ones who transfer the training, not on nagging the ones who don't.
  • Don't be a purist. Remember that the goal is to encourage people to try something new. If people try something and attribute improvement to their new behavior, by all means encourage them. Doing this doesn't preclude you from also providing some advice about how they might refine their new behavior to achieve even better outcomes.
  • Involve as many people as possible in the process of follow-up and the documentation of success stories. These people become part of the network and help spread the word.
  • Treat the people who transfer the training as the true experts on the training.
  • Don't get hung up on proving the cause and effect of how the new behavior improves performance. That so many past trainees are using it and consider it to be beneficial is proof enough. At the same time, if successful users are comfortable quantifying the benefits in terms of dollars, use that data to help sell the tool.
5. Coaching
The two previous post-training activities were not at all instructional in nature. The assumption was that at least some of the participants in training leave the training with what they need to successfully implement what they learned. It is not always the case that trainees get everything they need to be successful in training. In fact, in the specific case described, there were still many trainees who did not put the training into practice. A subset of that group didn't implement the training because they weren't convinced that they could be successful back on the job. A post-training activity that is appropriate in this case is coaching.

Coaching involves someone who is at least familiar with the concepts that have been trained following up with the trainee periodically to encourage, to provide a sounding board for the trainee while he or she begins experimenting with the concepts learned, and in some cases to provide specific advice for a successful implementation (Daniels 1995). Coaching is more resource-intensive than the two previous post-training activities. But coaching also works nicely in combination with them. Since the coaches will be very aware of who is and who is not having success putting the training into practice, they are in an excellent position to document success stories and facilitate the "user network."

When the opportunity exists for follow-up coaching, it is a good idea to clarify the role of the coaches during the actual training. It is also a good idea to have the trainees develop action plans. The participant's action plan is his or her personal commitment to put the training into practice and can also contain the steps to implementation and plans for where and when implementation will occur. When the trainees have created these plans, the coach is in the position to volunteer to help them meet their personal commitments. Without action plans in place, the coach could be perceived as someone who is calling to nag the trainee.

As with any effective coaching, the goal is to help the trainee successfully implement what they've learned with as little intervention as possible. Ultimately, it is in everyone's interest for the trainee to be able to use the tool without any assistance. Therefore the coach needs to be careful not to get involved in making decisions, defining actions, or implementing any part of the action plan himself or herself.

The coach needs to be familiar with the concepts that the trainee is trying to implement, but the coach does not have to be an expert in them. In most cases (unless what is being implemented involves tremendous risks), sound coaching skills are actually more important than having lots of experience with the concepts being implemented. Coaches have the advantage of learning through the experiences of each of their clients. Thus a novice coach will soon develop expertise, at least in his or her breadth of knowledge.

There are now whole companies that exist to provide the service of post-training coaching. Coaching skills are the core competence of these organizations. They are skilled at getting a team of coaches up-to-speed on any area of content and then helping the client organizations transfer what they've learned onto the job. These services may seem expensive at first. However, compared with keeping an army of coaches available within the training function, they probably are not. When evaluating the possibility of outsourcing coaching services, think in terms of the value to be gained if everyone who participated in the training was able to successfully implement what they learned back on the job. It is probably not an investment that should be made for all training, but in certain cases, it will pay for itself many times over, when transfer occurs.

Conclusions

This was a brief description of some of the pre- and post-training activities that are currently building momentum at Intel. It would be erroneous to say that any of these practices are widespread throughout Intel, or that the practices described are the only ones being used at Intel. But there is generally a trend at Intel to focus more of the aggregate training resource pool on pre- and post-training activities. Most of these activities are aimed, at least in part, at increasing transfer of training and the ultimate impact of training on performance. Intel is generating evidence that focusing on transfer and performance improvement is very worthwhile, and that pre- and post-training activities are crucial to this effort.

At Intel, as at most other companies, as the pressure for organizations to become more competitive increases, it is no longer going to be acceptable for the training function to simply play the role of training provider. If the training function does not increase its focus on transfer of training and on performance improvement, it is very possible that other functions in the organization will move in to fill this niche. This would be a shame, because the training function in most organizations has a great deal to contribute to improved individual and organizational performance, not just to education and training.

The next several years may be critical for the training function in many organizations. Evidence that the training function is adapting to the direction just described can come from several sources. It may be evident in the kinds of questions that top management asks when it reviews the training function. It may become evident in the cost accounting for the training function, with new line items showing up to account for activities that do not fit in the practice of training as development and delivery. It may become evident in the day-to-day activities performed by the training function--the way that training professionals spend their time and the outputs they produce. It will probably be some combination of these sources.

However, if it does not become evident in one way or another, I predict that training cost accounting will begin to tell another story. There will be a time before long when top management no longer measures the health of the training function by the girth of the training budget (the more training the better). Instead, training effectiveness will be measured by looking at the cost and the value associated with training. If evidence of the value is not there, the training budgets and training professionals will suffer.

References

Center for Effective Performance, Inc. 1992. Proving the Value of Your Training Workshop.

Daniels, William R. 1995. Breakthrough Performance: Managing for Speed and Flexibility. Mill Valley: ACT Publishing.

Esque, Timm J. 1995. "Watching Tom Gilbert's Feet," Performance and Instruction, November/December, 34 (10).

Gilbert, Thomas F. 1996. Human Competence: Engineering Worthy Performance, Tribute edition. Washington, DC: ISPI; Amherst, MA: HRD Press.

Kirkpatrick, D. 1983. A Practical Guide for Supervisory Training and Development. 2nd ed. Reading, MA: Addison Wesley.

Lipnack, J., and Stamps, J. 1993. The Teamnet Factor: Bringing the Power of Boundary Crossing into the Heart of Your Business. Essex Junction, VT: Oliver Wight Publications.

Newstrom, J. W. 1985. "Leveraging Management Development Through the Management of Transfer," Journal of Management Development, 5 (5) pp. 33-44.

Copyright © 1997 by McGraw-Hill. All rights reserved.


