Chapter 8 Diagnosis and Feedback
The Board of Cooperative Educational Services (BOCES) in New York had hired a new school
superintendent. Early in his new role, the superintendent decided that he needed to focus on several
important constituencies, among them the BOCES administrative staff. To better understand and
address the concerns of the internal group, he hired consultants to conduct interviews and
observations of staff members. The staff concerns centered on issues of communicating common
goals, understanding the vision and direction of the organization, participation in decision making, and
teamwork. As a result of the diagnosis, the consultant, client, and staff all agreed that a series of
workshops and action planning sessions would be useful to improve teamwork, clarify goals, and
increase trust and engagement. The workshops were held as agreed and were rated as very effective
by attendees. The group seemed to be making progress.
Just as the engagement was to conclude, a hidden conflict became apparent through a confrontation
between the superintendent and the staff. Seven administrators sent a formal letter to the superintendent
expressing significant concerns about many issues that had not yet been discussed or addressed,
reporting barriers that inhibited them from being successful in their jobs. The initial diagnosis was
now questionable, and additional data were gathered. A second round of staff member interviews now
revealed deep conflicts about the superintendent’s agenda, and the staff reported that they lacked
confidence in the superintendent and his direction. The superintendent considered resigning. A second
workshop was conducted to clarify roles, improve conflict resolution skills, and develop trust between
team members and the superintendent so that open communication could occur. The consultants
recognized in retrospect that while the data hinted at the hidden conflict, the initial diagnosis only
showed part of the picture (Milstein & Smith, 1979).
Why do you think the conflict was not discovered in the data gathering phase?
What, if anything, could the consultants have done differently to make the conflict apparent
earlier?
To many organization development (OD) practitioners, there is nothing quite as
overwhelming as the volume of data generated from interviews, focus groups, surveys,
observations, and unobtrusive measures. Depending on the
length of the engagement, the size of the organization and data gathering effort, or the
magnitude of the problem, such data can easily amount to hundreds or even thousands of
pages of notes and reports. These notes may contain individual stories and
interpretations, vivid observations, and statistical data from surveys, each of which may
be consistent with or contradict the others, or the client’s or practitioner’s initial
interpretations. At this point, the practitioner is faced with the challenge of sorting through
it all to answer a deceptively simple question and discuss it with the client: “What is going
on here?” This is the objective of the diagnostic and feedback phases of the OD process.
In this chapter, we explore the purposes of the diagnostic and feedback phases and
discuss how consultants sort, analyze, and interpret data to arrive at conclusions that
can be fed back to the client. We will address what kinds of conclusions consultants reach
during the diagnostic phase, as well as how the feedback meeting should be conducted in
order to present and discuss the data most effectively. We will address client reactions to
feedback, including organizational member and client resistance and the consultant’s
response to resistance. Finally, we will address ethical issues in the phases of diagnosis
and feedback.
As you may have already noticed, while data gathering, diagnosis, and feedback are
separated here as distinct phases of the consulting process, they bleed and blend together
in most consulting engagements. This can happen when, for example, during data
gathering, preliminary conclusions (diagnoses) prompt the consultant to follow new data
gathering paths. Also, client feedback can offer a new interpretation of the data, which
can add depth and nuance to the diagnosis. Taken together, these analytic stages
represent the process of addressing a client’s presenting problem with a complete picture
of the underlying problem or situation so that the right intervention can be chosen and
structured in the most effective way possible.
Diagnosis: Discovery, Assessment, Analysis, and
Interpretation
While diagnosis is a common term among practitioners, it is unfortunate that it holds the
connotation of the doctor–patient consulting model described in Chapter 5. Some argue
that diagnosis presumes a sickness-based model where the organization is ill and it puts
the consultant in the position of being the all-knowing expert who will present the cure,
while others argue that a diagnostic stage fails to fully capture the common practitioner
role of helping a client successfully identify ways to reach a preferred future (opportunities
to improve even when there are no explicit problems to be identified or a diagnosis to be
reached; Marshak, 2013a).
1164495 - SAGE Publications, Inc. (US) ©
For this reason, some writers prefer terms such as discovery, engagement, and dialogue
(Block, 2011, p. 163); assessment (Franklin, 1995; Lawler, Nadler, & Cammann, 1980;
Noolan, 2006); or analysis and interpretation. Regardless of the label, practitioners agree
that the purpose of diagnosis is to “help an organization understand its behavior and its
present situation—what’s going on, how it’s going on—so that something can be done
about it” (Manzini, 1988, p. 148). Diagnosis is not only an informational activity; it is
aimed at generating action.
It is during the diagnosis and feedback phases that the consultant and the client explore a
more thorough and nuanced view of the problem, a view that has been only partial to the
client up to this point. This is because, as Argyris (1970) writes,
Organizations are composed of human beings whose behavior and attitudes are
influenced by the positions they occupy, the roles they play, the groups and
intergroups to which they belong, and their own personality factors. Thus, each
individual may see a problem differently. (pp. 156–157)
Different interpretations of problems exist depending on job roles, organizational
locations, individual experiences, and more. An executive will have one view of the
problem and its causes, a middle manager another, and a frontline employee still
another. Showing the client how the problem can be viewed from
these multiple angles can mean that “consultants and clients understand and attack
causes of problems, rather than symptoms” (M. I. Harrison, 1994, p. 16). Indeed, Block
(2011) reports that managers turn to consultants because they have typically tried
unsuccessfully to solve a problem based on their own limited view of it, and that “the
consultant’s primary task is to present a fresh picture of what has been discovered. This is
70 percent of the contribution you have to make. Trust it” (p. 217).
Done well, the diagnostic and feedback processes can act as interventions on their own
and can motivate the client to take action to solve the underlying problem. Feyerherm and
Worley (2008) agree when they note that
assessment and provocative questions are such powerful interventions that
many OD processes are considered complete following an assessment because
the client sees the organization clearly (often for the first time) and can take the
actions necessary to improve system effectiveness. (p. 4)
Practitioners often make two common mistakes in the diagnosis phase. First, they treat
diagnosis as an event rather than as a process. As the opening example illustrates,
diagnosis is not a one-time occurrence—organizations, situations, groups, and people
change. Additional data will surface and the picture will continue to build. Diagnosis is not
a conclusion to reach in one day, but a set of preliminary beliefs about what is generally
happening, to be adapted as the organization changes. As the organization evolves, so
must the diagnosis.
Second, practitioners often make the mistake of single-handedly shouldering the burden
of diagnosis. Instead, diagnosis ideally would not be a process conducted by the change
agent alone. In fact, many OD practitioners prefer to involve clients or even client teams in
the diagnostic process. Bartee and Cheyunski (1977) state that diagnosis is most
successful when the practitioner can create a process for the client to participate in
developing an accurate set of conclusions. In this process, they write, “the client system
becomes the authority in determining what information is important to share,” and “the
clients immediately tend to own and take responsibility for the data generated” (p. 56).
Some advocate conducting workshops in which data can be presented and interpreted by
organizational members rather than by the practitioner alone (Bartee & Cheyunski, 1977;
Moates, Armenakis, Gregory, Albritton, & Feild, 2005). Following a low response rate on an
employee survey, for example, Moates and colleagues (2005) initiated a set of action
groups that were presented with the findings from the survey and interviews. The groups
were asked to interpret the data, rate the importance of various themes, and develop
ideas for addressing the problems described. In this way, the consultants facilitated the
diagnostic and feedback process, but the interpretation and choice of actions to take
belonged to the client organization. This may not be appropriate for some organizational cultures or
diagnostic subjects, but for many situations it is likely that such an approach would
increase the client’s trust in the outcome (M. I. Harrison & Shirom, 1999).
The diagnostic phase consists of a number of interrelated activities, listed below. Each is
described in greater detail in the following sections:
1. Analyze the data, including sorting them into key themes. Obviously, handing a stack
of interview notes or completed survey forms to a client with no analysis is not
useful. Instead, the consultant must summarize and abstract key points from the
data. The consultant will look for common themes in the data and organize them in
a way that helps the client understand the problem.
2. Interpret the data. Interpreting means drawing conclusions that are supported by the
data. The consultant’s role is to present the facts as well as to facilitate
understanding of the implications of the interpretations, beliefs, attitudes, opinions,
and inferences offered by organizational members.
3. Select and prioritize the right issues that will “energize” the client. Almost all data
gathering activities will produce a long list of issues, concerns, and contributing
problems, and some will be only minimally related to the current problem. Selecting
those that are most energizing helps motivate the client to focus on a narrow set of
issues to be addressed, implying a shorter list of actions. The issues will not all have
equal relevance or contribution to the problem, so the consultant can help the client
see which issues may have higher impact or priority than others.
Finding Patterns by Analyzing Data
The objective of the analytic exercise for the organization development practitioner is to
reduce a large amount of data to a set of “manageable patterns which will help to
organize the problem into a useful conceptual map” (Argyris, 1970, p. 157). Data analysis,
as some social scientists have observed, can be a “mysterious” activity (Marshall &
Rossman, 1989) and an “open-ended and creative” act (Lofland & Lofland, 1995). It can be
perplexing to decide where to begin and how to tackle such an endeavor. Fortunately,
social scientific researchers who have coped with this problem for many years have
developed useful solutions, whether the data are quantitative (such as in surveys) or
qualitative (such as in interview notes). Though academic research projects and OD data
gathering programs have different objectives and audiences (Block, 2011), OD
practitioners can learn from and apply a great deal of social science research practices in
the data analysis stage.
Many practitioners struggle to get the data analysis “right,” as if there were (to use a
metaphor) a needle of a true answer buried in the data haystack. This results from a
misguided assumption that there is only one true interpretation. As some OD practitioners
put it, somewhat philosophically, “The question of what is ‘truth’ remains tentative and
subject to revision” (Massarik & Pei-Carpenter, 2002, p. 105). Data can be organized in any
number of ways, and the interpretation and analysis of the data are often inseparable
from the experience of the person doing the interpreting and sorting. This, in fact, is what
distinguishes academic research from OD practice—the practitioner’s judgment and
experience have a great deal to add to the creative and intuitive process of analyzing the
data (Block, 2011). Levinson (1994) agrees, adding that the “practitioner is his or her most
important instrument” (p. 27). In addition, despite the practitioner’s best efforts, it is
probably a healthier attitude to take, and a less stressful process to follow, for the OD
practitioner to admit to never being able to know as much about the organization as the
clients themselves (Schein, 1999). A more realistic outcome would be for the practitioner
to develop a set of data-based and data-supported preliminary conclusions that lead to a
useful conversation with a client who can ideally learn from the practitioner’s conclusions
and participate in developing appropriate actions.
Procedures for analyzing data are generally derived from two logical methods for
reasoning from data (Babbie, 1992). The first is a deductive process, in which the analyst
applies general principles or a theory to a particular circumstance or (set of)
observation(s). The second is an inductive process, in which the analyst reasons from the
observations or the data to elicit general principles or a theory. These methods can be
applied to data analysis for OD practitioners as well. A deductive process of data analysis
consists of using models or theories about organizations, organizational change, and
human behavior to help sort and interpret the data. An inductive process consists of
reading and sorting through the raw data to develop the key themes from them. Both are
common approaches, as is statistical analysis of surveys and questionnaires.
Deductive Analysis: Using Models for Diagnosis
One popular method of analyzing data is to use a diagnostic model. Particularly if a
model has been used to develop the data gathering approach, such as a survey or
interviews, using a model to analyze the data is a natural next step. Using a model has
several benefits (Burke, 1994):
1. It makes coding data easier. Models present a finite number of categories into which
data can be sorted. With preestablished categories, the practitioner can more easily
sort interview comments into various groups.
2. It can help with data interpretation. The practitioner can notice which categories
contain more or fewer comments, or can notice which aspects of the model are
over- or underemphasized. Models also show relationships among categories that can be
used for action planning.
3. It can help to communicate with clients. Unlike lengthy theories or complicated
academic language, models are often graphic depictions that may be more easily
understood and that can more clearly direct a client’s attention to particular areas of
interest.
We have already explored a number of popular diagnostic models in this book. Weisbord’s
Six-Box Model, the Nadler-Tushman congruence model, and the Burke-Litwin model of
organizational performance and change (each of which is described in Chapter 4) have all
been used successfully in numerous OD engagements to diagnose problems and suggest
areas for attention. Each model differs in its choice of language and relationships, and
thus each offers something a little different for the practitioner to consider (Nadler, 1980).
One drawback is that these three are all models of whole organization functioning. Using
one of these models to analyze data generated from interviews on a team’s satisfaction
with how projects are assigned would not be very useful. Specific models such as those
developed for leadership or management (such as Likert’s four systems or Blake and
Mouton’s managerial grid, discussed in Chapter 2), employee engagement, or team
functioning might be more useful in some circumstances.
Burke (1994) gives an example of how this categorization process worked in one situation
in which he used Weisbord’s Six-Box Model to analyze data from interviews with eight
managers in a financial services company. He sorted the interviewees’ comments into
strengths and weaknesses by each of Weisbord’s six components, also labeling each
comment as part of the “formal” or “informal” system. When the data were categorized in
this way, he noticed that the informal system appeared to be stronger than the formal
system, particularly in the area of leadership, and that the category of purposes was
particularly weak compared to the others. An offsite meeting agenda was designed to
focus on goals, objectives, and strategies, as well as to build the formal leadership team
through relationships that had already been informally established.
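The sorting Burke describes can be sketched in a few lines of code. The comments below are hypothetical illustrations, not data from Burke's engagement; the sketch simply shows how tallying comments coded by box, system, and rating makes the kind of pattern he noticed visible.

```python
from collections import Counter

# Hypothetical comments, each hand-coded with one of Weisbord's six boxes,
# a "formal"/"informal" system label, and a strength/weakness rating.
coded_comments = [
    ("Our goals are unclear to most staff", "purposes", "formal", "weakness"),
    ("There is no planning calendar", "purposes", "formal", "weakness"),
    ("People help each other across departments", "relationships", "informal", "strength"),
    ("The leaders are approachable one-on-one", "leadership", "informal", "strength"),
    ("Leaders rally people without formal authority", "leadership", "informal", "strength"),
]

# Tally by (box, system, rating) to see where comments cluster.
tally = Counter((box, system, rating) for _, box, system, rating in coded_comments)

for (box, system, rating), count in sorted(tally.items()):
    print(f"{box:15s} {system:10s} {rating:10s} {count}")
```

Categorized this way, a weak formal "purposes" box and a strong informal "leadership" box stand out immediately, mirroring the pattern that shaped the offsite agenda in Burke's account.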
In addition to these widely known models, many practitioners use their own models that
they have developed from their own experience. Burke (1994) points out that most of
these are not published, and that 100 different practitioners would produce “100 different
diagnostic models” (pp. 53–54). Thus, there is not always agreement about which model
is best for which situation, and the diversity of models can be both an asset and a
drawback because of the assumptions contained in models. As we discussed in Chapter 7
when we addressed the use of models in survey design, models can be constraining.
Systems theory is a popular diagnostic model, for example, yet it also focuses our
attention on formally structured organizational processes and neglects interpretive acts of
organizational members, which can be important in understanding many organizational
problems. The very benefit of models in helping us narrow and focus the data can also be
a drawback. Models can highlight attention to certain areas but allow us to overlook
others, often oversimplifying complex processes (Golembiewski, 2000c).
Another danger is that we may become overly dependent on the model so that we cannot
see connections or patterns ourselves without the model. A final note of caution about
using the deductive approach with a model is that while it lends itself to categorizing and
counting issues and comments, these do not necessarily represent the issues about which
there is the most energy or emotion. In other words, a few participants may feel very
strongly about one theme, which may also be very important to the current problem, and
others may feel less strongly about a more frequently mentioned theme. Simply stating
that five comments related to “leadership” may not, by itself, be instructive enough to act
on.
Inductive Analysis: Pulling Out Key Themes
Unlike deductive analysis, inductive analysis is done without a predetermined set of
categories. That is, the data analyst determines what the categories will be. One benefit of
an inductive approach is that the label for the categories can more closely align with the
language of organizational members. The categories can also be customized to the
project so that more or fewer categories can be used depending on how the consultant
wants to present the data. This approach can even lend itself to creating a model
specifically tailored to the client’s situation, showing interrelationships between categories,
topics, and organizational groups or members. Here is a short example of how this
approach can be used.
Imagine that a client has engaged a consultant to determine why a team’s past three
major projects have missed their schedules, and that the following 10 comments are
derived from individual interviews with team members:
“Our project manager did not complete an accurate budget.”
“Management took too long to decide which proposal to accept.”
“We do not have the necessary systems in place for project managers to use.”
“Vacation schedules disrupted the work when team members were not available.”
“Management took members from the team for another critical project.”
“The schedule was inaccurate to begin with.”
“We do not pay enough to hire the most qualified people.”
“Project managers do not get paid for overtime.”
“Management changed the project scope halfway through the project.”
“We have no conference call capability to include remote team members.”
These data could be categorized into four areas:
Project planning (budget, scheduling)
Compensation/rewards (overtime, salaries)
Management (scope, resources, decision making)
Tools (systems, conference calls)
A second categorization system could be based on the focal person identified in the
comment (omitting two who do not fit this scheme):
Project managers (budgets, overtime, systems)
Managers (decision making, scope, moving resources)
Team members (vacation schedules, remote members)
Still a third method:
Financial systems (budgets, overtime, compensation)
Technology systems (conference calling, systems)
Human resource processes (vacation schedules)
Management processes (decision making, project scope, resource planning)
Any of these is an accurate categorization of the comments. In fact, a number of other
categories could be used as well, so determining how these comments get sorted would
depend on the consultant’s experience, the client’s preferences, and the organizational
culture. Ask yourself what the client will learn when you present the data each way. What
do you see in the three ways that this same list above is organized? Do you learn more
through one presentation than another? Does each accurately reflect the data, or does it
make one concern look more or less important than it deserves to be?
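The grouping step itself is mechanical enough to sketch. The label assignments below follow the topic-based scheme above and are one plausible reading of the ten comments, not the only correct one; swapping in a different label list yields a different, equally valid grouping.

```python
from collections import defaultdict

comments = [
    "Our project manager did not complete an accurate budget.",
    "Management took too long to decide which proposal to accept.",
    "We do not have the necessary systems in place for project managers to use.",
    "Vacation schedules disrupted the work when team members were not available.",
    "Management took members from the team for another critical project.",
    "The schedule was inaccurate to begin with.",
    "We do not pay enough to hire the most qualified people.",
    "Project managers do not get paid for overtime.",
    "Management changed the project scope halfway through the project.",
    "We have no conference call capability to include remote team members.",
]

# One plausible per-comment assignment under the topic-based scheme.
topic_labels = ["project planning", "management", "tools", "project planning",
                "management", "project planning", "compensation/rewards",
                "compensation/rewards", "management", "tools"]

def group(labels, items):
    """Invert a per-item label list into a category -> items mapping."""
    grouped = defaultdict(list)
    for label, item in zip(labels, items):
        grouped[label].append(item)
    return dict(grouped)

by_topic = group(topic_labels, comments)
for category, grouped in sorted(by_topic.items()):
    print(f"{category}: {len(grouped)} comment(s)")
```

Rerunning `group` with labels for the focal-person or systems-based scheme produces the alternative views, which makes it easy to put several categorizations side by side before choosing one to present.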
Note that each of the categorization schemes identified above reveals little about the
content of the data. The client cannot immediately tell from the heading “Project planning”
whether this is a strength or a weakness (or even a more subtle combination of the two)
from the data. To address this fact, some practitioners advise following an additional step
of formulating the labels or categories into sentences and theme headlines that explain
the content of the data more explicitly. For example, using the “Tools” category above, the
practitioner could write the following explanatory sentence and include quotes from the
data as supporting examples to illustrate the point:
Current tools may be insufficient to support the project teams:
“We do not have the necessary systems in place for project managers to
use.”
“We have no conference call capability to include remote team members.”
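Turning a category label into an explanatory headline with supporting quotes can also be sketched. The function name and formatting here are illustrative assumptions, not a standard reporting format.

```python
def theme_headline(headline, quotes):
    """Render a theme as a hedged headline sentence followed by its supporting quotes."""
    lines = [headline + ":"]
    lines.extend(f'    "{quote}"' for quote in quotes)
    return "\n".join(lines)

report = theme_headline(
    "Current tools may be insufficient to support the project teams",
    ["We do not have the necessary systems in place for project managers to use.",
     "We have no conference call capability to include remote team members."],
)
print(report)
```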
Learning data analysis of this type is best done through practice, and practitioners each
have their own preferred methods of coding data. Generally, most practitioners perform
inductive coding following these seven steps:
1. First, read through a good portion of the data again without taking notes. The
purpose of this is to become very familiar with the general data rather than to take
any action at this point.
2. Then, setting aside the data, write down several key ideas or themes that stand out
in what you have just read. This can be a challenge, and you may draw a blank, but
persevere. “Don’t be surprised if, after doing this, you are no better off than before.
You may have difficulty detecting particular trends. . . . Don’t be alarmed” (Manzini,
1988, pp. 77–78).
3. Next, go back through the data again and ask yourself what this comment is trying
to say or what idea or concept it is an instance of. Give the comment the appropriate
label(s). In the example above, the comment “The schedule was inaccurate to begin
with” could be labeled as “project planning.”
4. Proceed through the list of comments, with each comment being placed into a
category (or more than one) already in place or becoming an instance of a new
category if it is unlike any that have been created so far.
5. When all comments are analyzed, sort them by category and validate that each
comment has been accurately placed. Look for categories where there are very few
comments and those where there is an abundance.
6. Determine whether the “saturated” categories could or should be broken down
further and whether the “single-instance” comments are truly unique or whether they
could be combined with another category.
7. Write a description of what each category means and which categories may be
related to others.
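Steps 3 through 6 of this process can be sketched as a small coding loop. The comments and labels are hypothetical; the point is that categories are created on the fly as new labels appear, then reviewed for saturation.

```python
from collections import defaultdict

def code_comments(labeled_comments):
    """Steps 3-5: place each comment into every category its labels name,
    creating a category the first time a new label appears."""
    categories = defaultdict(list)
    for comment, labels in labeled_comments:
        for label in labels:
            categories[label].append(comment)
    return dict(categories)

labeled_comments = [
    ("The schedule was inaccurate to begin with.", ["project planning"]),
    ("Our project manager did not complete an accurate budget.", ["project planning"]),
    ("Project managers do not get paid for overtime.", ["compensation"]),
]

categories = code_comments(labeled_comments)

# Step 6: flag crowded and single-instance categories for further review.
saturated = [name for name, items in categories.items() if len(items) >= 2]
singletons = [name for name, items in categories.items() if len(items) == 1]
print("saturated:", saturated)
print("singletons:", singletons)
```

Step 7, writing a description of each category and its relationships, remains a judgment call that no loop can automate.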
As you may conclude from the explanation and examples above, an inductive analytic
approach can be time-consuming and even frustrating when data are plentiful. It requires
reading through the data numerous times to become very familiar with the issues, and it
demands flexibility and an open attitude from the practitioner. Because the categories do
not exist until the practitioner creates them, the inductive approach may require more
knowledge of the organization, the data, or organizational theory in general than the
deductive approach.
Two final notes of caution should be mentioned. First, because the analyst is generating
the categories, there is a danger of particular comments resonating with the practitioner’s
own opinions or beliefs (this may be particularly true for internal consultants) so that
certain comments or categories gain more weight than they should. A practitioner who has
had a bad experience with senior management on another project may be inadvertently
looking for comments about management to support his or her own view. On the other
hand, the practitioner’s own intuition is a powerful source of data, and listening to this
inner voice can be instructive in determining how to analyze the data. Second, careful
attention must be paid to the practitioner’s language choices in the words and phrases
used to summarize the categories. In the “tools” example above, the practitioner would
need to reflect on whether a description of the tools as “insufficient” is a fair
representation of what was heard in interviews. Would a stronger phrase than “may be”
more accurately reflect what was said? We will discuss ways to address these dilemmas
below.
Statistical Analysis
For surveys (especially those with Likert-type items) or sophisticated analyses of themes
generated through inductive or deductive categorization processes, statistical analysis can
be a powerful tool for data analysis. Statistical analysis is very common with
organizationwide employee satisfaction or engagement surveys, for example. Methods for
conducting statistical tests are much too complex to present here, and compared to
qualitative analysis, such quantitative procedures are probably necessary in fewer
circumstances. Most OD practitioners should be familiar enough with basic statistical
charts (e.g., frequency distribution tables, histograms, x- and y-axes, bar and line charts)
and descriptive statistics (e.g., mean, median, mode) to interpret their meaning and to
explain them to a client.
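A minimal sketch of those descriptive statistics, using hypothetical responses to one Likert-type item (1 = strongly disagree through 5 = strongly agree):

```python
import statistics
from collections import Counter

# Hypothetical responses to a single survey item.
responses = [4, 5, 3, 4, 2, 4, 5, 3, 4, 1]

print("mean:  ", statistics.mean(responses))    # 3.5
print("median:", statistics.median(responses))  # 4.0
print("mode:  ", statistics.mode(responses))    # 4

# A frequency distribution, the raw material for a histogram or bar chart.
frequencies = Counter(responses)
for value in range(1, 6):
    print(value, "#" * frequencies[value])
```

The text-mode bars are a stand-in for the visual displays Nadler recommends; a simple chart of the same frequencies is usually more persuasive to a client than the raw numbers.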
For the statistical research–oriented practitioner, powerful statistical tests can be very
persuasive ways of diagnosing an organization. Often, however, clients without the
research background of the analyst find these tests less persuasive. “Frequently, one sees
consultants or researchers piling volume upon volume of computer output onto an
overwhelmed, confused, but supposedly grateful client,” Nadler (1977, p. 149) writes. The
time and effort the practitioner may need to invest to explain the conclusions drawn from
an independent samples t-test or ANOVA may result in distracting the client from the
genuine issue that ought to be addressed. Too much statistical detail can also lead the
client to ask a number of purportedly “interesting questions” (“Are there statistically
significant differences in mean responses on Question 15 between sales and marketing?”)
that lead to additional statistical tests and thus permit the client to delay an action phase,
a potential form of resistance that is described later. The best approach may be to be
judicious with the use of statistics to include just enough to build a persuasive case
without overwhelming the client with too much data. Nadler recommends including a
small set of relevant data, simple enough for a nonexpert to interpret, with visual displays
to help condense the important facts.
Interpreting Data
When the data are sorted into categories and reduced from the volume of raw comments
into the most prominent themes, the task remains to develop interpretations or inferences
from the data to determine what conclusions can be drawn. The practitioner can now ask
a number of questions about the data, such as those discussed when contracting with the
client: What are the strengths and weaknesses being described? What is the nature of the
problem according to the data? What are the contributing sources of the problem in the
data, and how do these differ from the client’s view? What is being done about the
problem today? Are there differences in the data according to any demographic variables
of interest?
In this process, it is easy to subconsciously slip from facts and data to inferences,
including some inferences that may not be supported by the data. Levinson (1994) offers
an instructive recommendation: “The practitioner should be able to cite the facts from
which the inferences were made, specify the alternative inferences that were possible, and
explain the reason for choosing one over another” (pp. 43–44). A conclusion that
“employees do not trust senior management” can be drawn from various facts reported in
interviews: employees feel punished for taking risks, they choose not to report bad
information, and they offer examples of managers’ promises left unfulfilled. Having the
facts available to support the inference helps to bolster the likelihood that it is a
reasonable interpretation, and it gives both clients and change agents more confidence in
the conclusion.
One reason why it is easy to make a dangerous slip to inferences not supported by the
data is that practitioners may draw conclusions based on their own experience, potentially
looking through data to back up a view they already hold. Kahn (2004), for example,
describes a powerful consulting experience he had in a social services agency in which,
because of his own personal family background, he began to side with organizational
members as they blamed the agency director for the organization’s problems.
Assumptions about leadership, power dynamics, follower behavior and responsibility, and
his personal sympathy with certain organizational roles all combined to push Kahn toward
a particular interpretation about who held responsibility for problems and away from
addressing sensitive issues including racial and gender dynamics.
There are many ways that practitioners can avoid this problem and develop more sound
conclusions. One way is to conduct the analysis multiple times in different ways. Re-sorting
the data into new categories or using multiple models can validate the analysis or
offer alternative conclusions. Try both inductive and deductive analysis to see what
results. A second method is to invite a colleague who is not familiar with the data (if
confidentiality agreements with the client allow it) to either conduct a “second opinion”
analysis of the data or listen while the analyst describes the conclusions that have been
drawn. The second practitioner can test inferences to ensure that they are supported by
the data. A third method, perhaps the most obvious, is to work through the interpretive
process with the client. If multiple interpretations are possible, holding a dialogue with the
client can clarify which of these is most reasonable. Again, the consultant’s role is less
about bringing the right, true answer to the client and more about facilitating a learning
process that allows the client to explore an alternative picture and to have access to an
angle that has been missing to this point. Figure 8.1 summarizes these three options.
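The first of these safeguards, re-sorting the same data under more than one category scheme, can be sketched in a few lines of code. This is purely an illustration, not drawn from the source: the excerpts, tags, the two schemes, and the `sort_by` helper below are all hypothetical.

```python
from collections import defaultdict

# Hypothetical coded interview excerpts: (excerpt, tags) pairs.
# None of these quotes come from the cases in this chapter.
excerpts = [
    ("Taking risks here gets you punished", {"trust", "risk"}),
    ("I don't pass bad news up the chain", {"trust", "communication"}),
    ("Managers promise things and don't deliver", {"trust", "leadership"}),
    ("Meetings run long with no decisions", {"process", "communication"}),
]

def sort_by(excerpts, scheme):
    """Group excerpts under whichever of their tags the scheme includes."""
    groups = defaultdict(list)
    for text, tags in excerpts:
        for tag in tags & scheme:   # keep only tags this scheme cares about
            groups[tag].append(text)
    return dict(groups)

# Two alternative category schemes ("models") applied to the same data.
interpersonal = sort_by(excerpts, {"trust", "leadership"})
structural = sort_by(excerpts, {"process", "communication"})

for name, groups in [("interpersonal", interpersonal), ("structural", structural)]:
    for theme, quotes in sorted(groups.items()):
        print(f"{name:13s} {theme}: {len(quotes)} supporting excerpt(s)")
```

An inference such as "employees do not trust senior management" is on firmer ground if supporting excerpts surface under more than one sorting; a theme that appears only under a single scheme deserves a closer look before it reaches the client.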
Figure 8.1 Avoiding Bias in Interpreting Data
1164495 - SAGE Publications, Inc. (US) ©
Selecting and Prioritizing Themes
With themes developed and preliminary conclusions drawn, the practitioner must consider
what issues should be brought up to the client. Not all of them will be germane to the
current engagement and not all can be addressed. Thus, a careful prioritization process is
necessary to “avoid presenting management with a huge laundry list of problems, which
may appear overwhelming in its totality. Isolate the truly significant problems and issues,
and prepare summaries that describe their nature and extent” (Manzini, 1988, p. 143). How
does a consultant decide, however, that employee complaints about workload should be
shared, but that complaints about the cafeteria food should be lower on the list? What
about the complaints of a small but emotional minority who bring up a highly charged
issue? Choosing what themes to select is a function of the problem, the data, the
engagement, the contract, and the consultant’s own experience and intuition.
Golembiewski (2000a) offers a useful set of “features of energizing data,” or those criteria
that, when met, are likely to energize the client toward action rather than deepening
cynicism or frustration that the issues seem unresolvable. Among them are the following:
1. Relevant. Issues shared with the client should be relevant to the problem for which
the consultant and client have contracted. The client is not likely to be interested in
issues that are remote or not among the most influential causes of the problem.
Cafeteria food may not be relevant to the team’s challenges in agreeing on a work
schedule that meets each team member’s needs.
2. Influenceable or manageable. It will build energy when the consultant presents
issues that the client can change. Sharing data about problems that the client
cannot control (“Two of your problems are that the price of oil is too high for your
budget, and you are in an industry that is declining at the rate of 11 percent per
year”) pushes responsibility for the current problem further away and drains the
client of energy to take action. Not only should the issue be manageable, but Block
(2011) recommends choosing themes on which people want to work.
3. Descriptive. The most useful data will describe current facts, rather than using
themes to judge, evaluate, pinpoint blame, or isolate individual contributions to
problems. Data that are evaluative and that punish are likely to be resisted. This
does not mean that emotion should be ignored. Feelings and opinions of
organizational members fall into the category of facts that can be described. “I
heard these three themes from my interviews” is an appropriate descriptive
statement that outlines the issues with a minimum of evaluation.
4. Selective. Not all of the themes in the data can or should be discussed. It is tempting
to present too much in order to expand the client’s picture of the situation, but such
efforts are likely to overwhelm the client, who may then have trouble deciding what
to do next. Choosing the top few issues to present will focus attention on those that
are the most important, leaving out those issues mentioned less frequently.
5. Sufficient and specific. The tradeoff of selectivity, Golembiewski (2000a) writes, is
that too little information may be presented, which may not be enough to fully
understand the situation. Enough detail should be provided so that the client can
consider specific actions to take. Using the example earlier, stating that
“management issues” are a key reason why projects are behind schedule may be
accurate, but without additional explanation, it is also too vague to decide what
action would remedy the problem. Also, selected issues should be described at
relatively equal conceptual levels, to avoid mixing very specific feedback such as
“Employees would prefer that you answer voice mail within 4 hours” with
more general descriptions such as “Employees feel that you do not manage change
well.”
Figure 8.2 summarizes these five guiding principles for selecting and prioritizing themes to
share with a client.
Feedback
Figure 8.2 Selecting and Prioritizing Themes
After the data are analyzed, sorted, and selected for the degree to which they will energize
action, the practitioner shares these findings with the client. This can take the form of a
written feedback report, a feedback meeting, or most likely both. How and when the
feedback is presented is best discussed as a part of the contracting process (Noolan,
2006). As a practical matter, creating a report and sending it to a client is not a very
difficult process. As Argyris (1970) writes,
If the basic objective of feedback is simply to offer a clear presentation of results,
the interventionist need only develop a well-written, well-bound, easily understood
report . . . and select a capable lecturer who will tend to prevent difficulties from
developing. (p. 322)
The feedback meeting has a more important purpose, however, in engaging the client in
doing something different, which is why Block (2011) prefers to call it a “Meeting for
Action” (p. 229). During the transition from data to action, the change agent’s presentation
of the data can put even the most willing client at a crossroads of what may be a difficult
and even personally painful process.
Feedback to a client, even if the client has invited it, can be a sensitive process with high
anxiety for both the consultant and the client. The client has exposed the organization to
the consultant, and as a part of the system, the client is implicated in its problems. Even if
the data are not about the client but about the organization in general, the client may see
the feedback meeting as a personal and professional evaluation of competence. Because
the objective of data gathering is often to develop a description of the problem that
enhances what is understood by the client, the tacit implication in a feedback session may
be that the client did not adequately understand the problem, and such a situation can
produce a “sense of inadequacy [that] can arouse anxieties and guilt feelings” (Argyris,
1970, p. 323). The client may become defensive and resist the data in ways we will
describe later. To maximize the likelihood that the client will receive and understand the
feedback and be motivated to take action, the feedback meeting must be set up for
success by creating an environment in which exploration of issues, learning, and action
planning are possible.
Feedback has both motivating and directing functions (Nadler, 1977). Feedback motivates
action when it is inconsistent with what is already believed, so that it produces a level of
discomfort and the recipient is prompted to take different actions to reach different
results, a process Nadler calls disconfirmation. Feedback also directs attention to the right
set of actions that will produce better results, a process Nadler calls cueing.
Nadler (1977) writes that for change to occur based on feedback, the feedback itself and
the process must create energy. The client may have energy to fight the feedback, in which
case the practitioner will notice resistance. If the client has energy to work on the
feedback, there may still be no change if the client works on the wrong issues or if there
are not supportive processes to help the change be successful. The practitioner’s charge is
to help create energy with the client and direct that energy to the most appropriate areas
for change. As we have seen, not all feedback will motivate or direct change. The data will
not produce energy if they violate the characteristics described earlier. If the information is
not seen as relevant, specific, or sufficient, for example, the client will resist it.
The five principles above for selecting themes to present are also characteristics of
effective feedback. In addition, feedback should be seen as verifiable. Nadler (1977) writes
that “people will respond more to data that they feel are valid and accurate,” meaning that
clients will be more likely to have faith in the feedback if they also believe that “the data
are truly representative of what goes on in the organization” (p. 147). The feedback should
be framed as unfinalized. That is, this discussion of the data gathered is not a permanent
state or condition, and the practitioner is not there to offer an expert judgment written in
stone. Instead, as Nadler puts it,
successful use of feedback usually involves using the data as a starting point for
further exploration rather than as an ending point. . . . In a meeting, for example,
the best and most descriptive data are in the heads of the people around the
table, not on the feedback sheet of the consultant. For feedback to be effective,
the formal data should serve only as a starting point for more in-depth data
collection, problem identification, and problem solving. (p. 148)
This idea is at the heart of a feedback meeting, where the client and consultant should
jointly collaborate on a discussion of the data, using the data gathered as a foundation for
dialogue.
Structuring the Feedback Meeting
Organization development practitioners should consider how feedback meetings can be
set up for success even before they begin. The practitioner should think carefully about
the environment in which the meeting will be held and whether it is conducive to learning,
exploration, and dialogue in a confidential setting. If the client’s office is a desk in an open
room in which other organizational members frequently walk in and out, the client will not
be focused enough to hear the feedback and confidentiality may be violated. Other
common interruptions in a client’s office such as a ringing telephone, frequent visitors, and
the immediacy of incoming e-mails all can be distracting. For this reason, it may be useful
to choose a different locale. Also, enough time should be dedicated to the meeting so that
the consultant can review the contract, the data, details behind the data, and discuss
actions or interventions. Too little time will not allow the issues to be explored fully in a
single meeting. Even seating matters. Sitting across from the client can create an
adversarial environment where client and consultant seem to be working against one
another. Sitting at a round table or side by side looking at notes or a presentation may
create a more collaborative environment.
Experienced practitioners have developed their own preferences for how to structure the
feedback process. Levinson (1994), for example, recommends that the consultant and
client agree to set aside the last 2 hours of one workday and the first 2 hours of the next
morning for review and discussion. In the first session, he reads the report to the client
aloud, and only then gives the client a copy of the report for review in the evening. The
following morning, the client, after having read the report alone and considered it, works
with the practitioner to clarify the issues and to develop action plans. Block (2011) takes a
different approach, quickly reviewing the original contract and presenting findings, with
the bulk of the meeting dedicated to exploring client reactions, discussing
recommendations, and deciding what actions to take next. He argues that most
practitioners spend too much time on the data and themes so that little time remains to
explore how the client feels about the data and what should be done with the feedback. To
take yet a third approach, Argyris (1970) refuses to develop recommendations on the
grounds that the clients must own the feedback as well as the recommendations. He
argues that providing recommendations robs the client of the opportunity to explore and
internalize the issues first.
Giving an early copy of the feedback to the client can be a double-edged sword. On the
one hand, the client has the opportunity to review it before a personal meeting and may be
better prepared to accept particularly difficult feedback after a few days of absorbing its
meaning and implications. On the other hand, action-oriented clients may be tempted to
act hastily on the feedback before discussing the data with the practitioner and may act
on a misunderstanding or emotional reaction. The practitioner alone can make this
determination based on personal preference, experience, and knowledge of the client.
Presenting Data in the Feedback Meeting
The discussion in the feedback meeting presents several opportunities to increase or
decrease client acceptance of the data. Here are a number of recommendations for
managing the feedback session and presenting the data:
Even though the client may see it as ritualistic, begin with positive data. Encourage
the client to accept and appreciate the organization’s strengths. This is a fine
approach as long as the feedback is genuine and authentic; otherwise, it can feel
to some clients like being set up before being hit with the bad news. Understanding
strengths can be especially helpful when the client can use them to compensate for
or to address weaknesses.
Ensure that the themes described in the feedback report provide enough detail to be
accurately defined and useful. Some consultants pick one or two representative
quotes from interviews as an explanation of a particular theme. One caution in using
this approach is that quotes can contain identifying information that may violate an
interviewee’s anonymity. Beyond the simple category label “improve team meetings,”
the practitioner should be able to define what interviewees wanted improved.
Quantitative data can illustrate trends and how widespread agreement is across the
organization, but they can also provide too much detail for clients. A healthy
sprinkling of statistics may be all that is necessary to illuminate the trends.
Language choices are important in presenting the data. Using nonevaluative
descriptive language such as “employees mentioned that decision making on the
executive team seemed slow” will be feedback that is more likely to be accepted and
acted upon than “you are a slow decision maker.” Also report “the specific impact on
self or other’s behavior, attitude, or value” (Golembiewski, 1979a, p. 65) so that it is
not just the behavior that is understood, but the implications of it as well.
A common problem is that the practitioner will project his or her own feelings onto the
client (Block, 2011; Scott, 2000). The practitioner may assume that the client will be
angry, hurt, or embarrassed, because that is how the consultant might feel in the
same situation. Consequently the practitioner makes assumptions about the client’s
feelings instead of focusing on the facts.
“Be willing to confront the tough issues” (Scott, 2000, p. 149). However painful it
may be for the consultant to say or for the client to hear, it is necessary to share.
Avoid minimizing the feedback in order to create a harmonious relationship or to
avoid upsetting the client (for example, by saying “I’ve seen many managers have
this same problem” or “I do the same thing myself”). Such statements water down
the significance of the message and encourage the client not to take it seriously.
Resistance
Listen carefully to managers, change agents, and OD practitioners talk about reactions to
change, and you will likely hear variations of the statements “no one likes change” and
“people resist change.” Kotter and Schlesinger (2008) write that “organizational change
efforts often run into some form of human resistance” (p. 131), and the causes they cite
include parochial self-interest, misunderstanding and lack of trust, different assessments
of the costs or benefits of the change, and low tolerance for change. O’Toole (1995)
provides a list of 33 reasons that lie at the heart of resistance to change, including fear of
the unknown, fatigue over too much change, cynicism that change is possible, and a
desire to keep the status quo and one’s comfortable habits. Resistance is commonly seen
as a major barrier between the change agent and successful implementation of the
change, manifested in behaviors such as “‘push-back,’ ‘not buying in,’ ‘criticism,’ ‘foot
dragging,’ ‘workarounds,’ . . . not responding to requests in a timely manner, making critical
or negative comments” (J. D. Ford & Ford, 2010, p. 24), and other sabotaging actions.
Scholarly attention to resistance has resulted in hundreds of research articles examining
antecedent factors, contextual factors, personality traits, and more (see Oreg, Vakola, &
Armenakis, 2011, for a review) to recommend approaches to organizational change that
will minimize resistance and enhance organizational members’ acceptance of change. In
general, it seems, resistance is framed as a primary reason why change attempts fail
(Erwin & Garman, 2010).
As a result, practitioners commonly work at “overcoming resistance,” a phrase that has a
long history in organization development, perhaps first articulated in an article early in the
history of the field (Coch & French, 1948). Managers and change agents strategize about
the best ways to succeed in the face of opposition and search for the approaches,
activities, and strategies that will get employees to drop their resistance and embrace the
change. Kotter and Schlesinger (2008) advise managers to take a variety of actions on a
continuum from educating members and inviting participation to manipulating and
coercing them. However, the underlying view and mental model we have of resistance may
not be the most helpful in our work with clients and their organizations. In this section we
will look at how OD practitioners can benefit from rethinking the concept of resistance and
how we can work skillfully with clients who express it in order to encourage more
productive dialogues about our change projects.
Thinking Differently About Resistance
When you hear the phrase resistance to change, what images come to mind? Some
authors writing about this phrase point out that it may have outlived its usefulness. Dent
and Goldberg (1999) reviewed the evolution of its meaning since its inception, noting that
“the phrase overcoming resistance to change implicitly suggests that the source of the
problem is solely within the subordinates and that the supervisor or more senior executive
must overcome this unnatural reaction” (p. 37). The connotation we have of resistance to
change is of a recalcitrant, disobedient, and irrational employee audience actively or
passively working to oppose the justified, beneficial, and helpful actions of a rational
change agent. Resistance is seen as existing only in “them,” not as a function of any
actions of the change agent, and it is almost universally seen as negative. There are
several reasons why a more nuanced view of the concept might be beneficial as we
propose changes to our clients.
There is little agreement about a definition or set of behaviors that universally count as
resistant. Jeffrey and Laurie Ford (2010) share an example of three project managers who
delivered presentations to team members about a change. Two project managers received
many questions about the change and the third received no questions at all. The latter
reported being “stonewalled by silence” (p. 25). Among the two who had received many
challenging questions about the change, one said that he had been “interrogated” by a
resistant group, and the other called it an “energizing meeting” with an engaged group.
“Two opposite behaviors, asking questions and not asking questions, were both perceived
as resistance. When two groups did the same thing, ask questions, their behavior was
perceived in opposite ways, as either resistance or engagement” (p. 25). Resistance, then,
may not be a universally interpreted phenomenon; the actions and interpretations of the
change agent may themselves play a part in what we label resistance.
Moreover, a more nuanced view of resistance suggests that reactions to change are not
easily sorted into buckets of “for the change” and “against the change,” or categories of
“supportive” and “resistant.” Our attitudes exist along a continuum and sometimes the
same change can produce both positive and negative attitudes and behaviors. A member
of the accounting team may be supportive of implementing a new software system that
generates easy-to-read reports, but she may dislike the requirement that all data be
submitted by noon on Friday. She may think that the new system will save time and
enthusiastically discuss this point with others to persuade them of the benefits of the
change, but she may also refuse to submit data on schedule. Such examples show how
apparently inconsistent opinions about a single change are possible. Piderit (2000)
advises us also to consider the idea that organizational members may be ambivalent to
change, recognizing that in complex organizations, changes are complex as well and may
have multiple components. These complex changes might produce multidimensional
beliefs about the change (a cognitive dimension, or “what I think”), attitudes toward the
change (an affective dimension, or “how I feel”), and actions toward the change (a
behavioral dimension, or “how I act”). In this view, reactions to change are better seen as a
mix of beliefs, emotions, and behaviors, each of which can vary on a scale.
Jeffrey Ford, Laurie Ford, and Angelo D’Amelio (2008) write that there are three ways that
OD practitioners should rethink our model of resistance and take different actions as a
result of this shift in our perspective:
1. We should remember that resistance is a label we apply based on our own
interpretation. Rarely, if ever, do organizational members stand up to announce, “I
am going to be resistant now.” Instead, they may seek out answers, ask questions,
share opinions, and otherwise act in ways that may be interpreted by a change agent
as resistance. When change agents enter a conversation assuming that there will be
resistance, they may find what they expect to find, even labeling as resistance
ordinary behaviors that in another context may not be considered resistant (e.g.,
responding late to an e-mail, sitting in the back of the audience). Change agents who
are later asked to account for the failure of a change program will “take credit for
successful changes and blame other factors, such as resistance, for problems and
failures” (J. D. Ford et al., 2008, p. 364). This justification not only minimizes the
change agent’s responsibility but serves to reinforce our commonly held notions that
change failures are due to resistance. Consequently, it may be valuable to set aside
the label and instead approach interactions with change recipients as a conversation
opportunity that exists in a change agent–recipient relationship, allowing others the
complexity of their own perspective and accepting our own role in the change
process.
2. As change agents, we may be contributing to the very resistance we are trying to
avoid, through misrepresentation, inauthentic behavior, or ambivalence of our own.
We may ignore the dual advantages and disadvantages of a change and promote
only the positives, offering an intentional or unintended misrepresentation of the
change. Change agents must be authentic about the rationale for the change and
accurate about assessments of it. When change agents hide or avoid discussing a
change in order to avoid eliciting resistance from organizational members (or avoid
discussing resistance itself), they may unwittingly encourage resistance from
members who see through the inauthentic behavior. In addition, change agents may
promote ambivalence of their own: In a study of management communication at an
aerospace technology company, Larson and Tompkins (2005) noted that “what
appears as employee resistance to management initiated change . . . might also
reflect the subtle and not so subtle ambivalence of managers” (p. 17) who
inconsistently promoted both a change and past practices. Employees may see little
need to change when they hear multiple inconsistent or competing messages.
3. We should recognize that resistance can be a useful resource to us in our change
projects. In fact, there are a number of benefits of resistance (J. D. Ford & Ford, 2010;
J. D. Ford et al., 2008):
Resistance can clarify purpose. Resistance to the need for a change can
prompt a change leader to more clearly articulate the rationale for the change
in a way that resonates with recipients. Piderit (2000) notes that “divergent
opinions about direction are necessary in order for groups to make wise
decisions and for organizations to change effectively” (p. 790). Conversations
about purpose can test that the right problem is being solved or ensure that
there is agreement about the nature of the problem to be solved and the need
for the change.
Resistance can keep the change active in the organization’s conversations.
Among all of the other projects, programs, initiatives, problems, and
challenges, a new change has a number of competitors to gain employees’
attention. Instead of letting the change conversation die from lack of interest,
resistance offers at the very least the opportunity to keep the conversation
going. Even “the honest expression of ambivalence seems more likely to
generate dialogue than the expression of either determined opposition or firm
support” (Piderit, 2000, p. 790), offering evidence that ongoing debate and
discussion are likely to be valuable as organizational members struggle with
what the change means and how they feel about it.
Resistance can enhance the quality of the change and its implementation.
Resistance is often a reaction from those who genuinely care and may know a
great deal about the change being attempted. By “listening keenly to
comments, complaints, and criticisms for cues to adjust the pace, scope or
sequencing of change and/or its implementation” (J. D. Ford et al., 2008, p.
369), recipients’ feedback, positive and negative, can be taken into account to
create a better solution.
Resistance can provide additional data about how employees feel about past
change attempts and the organization itself. Change agents who hear “we’ve
tried that and it failed” or “why should we trust you this time?” may overlook
the organization’s past history of change attempts or underlying employee
concerns about management’s past behavior. In fact, Bordia, Restubog,
Jimmieson, and Irmer (2011) found that negative beliefs about past changes
were significantly related to lowered trust in the organization and a higher level
of cynicism about organizational change. Resistant messages like these can
reveal a great deal more about the general state of the employee population.
Resistance can reflect, and potentially build, involvement in and commitment
to the organization. Organizational members who are protective of their
organization and dedicated to its success may be correspondingly concerned
with changes that they might see as threatening. In fact, this resistance can
serve a useful function: “In a world with absolutely no resistance, no change
would stick, and recipients would completely accept the advocacy of all
messages received, including those detrimental to the organization” (J. D. Ford
et al., 2008, p. 370). Thus, resistance can serve an organizationally protective
purpose.
When we label change recipients as resistant and ignore or otherwise circumvent their
criticisms and suggestions, we fail to see the value of their input and use it for the benefit
of the change. Instead, when we see resistance as a natural part of an ongoing
conversation with organizational members who we want to engage in our change
proposals, we take seriously the multifaceted nature of beliefs about change, we increase
involvement, and we realistically appraise our own role in the change process.
Working With Client Resistance
With clients, the diagnostic process and the feedback meeting can surface negative
feelings about change that the client may be unable or unwilling to address, including
feelings of loss of control that are common when one’s ideas and beliefs are confronted
by new data. A client’s resistance to the data and to taking action is a natural part of the
process and is commonly expressed in feedback settings. Resistance can also be
frustrating for the change agent, who may harbor a fantasy that “if our thinking is clear
and logical, our wording eloquent, and our convictions solid, the strength of our arguments
will carry the day” (Block, 2011, p. 21), but just as organizational problems are often both
technical (e.g., the strategy is unclear, process delays cause quality problems) and
personal (e.g., employees lack motivation), a client’s reaction to feedback is often both
rational and emotional. No matter how good the data collection and thematic analysis, an
emotional response is likely. Resistance is a reaction to the emotions being brought up by
uncertainty and fear. As we have seen, however, resistance is not always negative. It may
also be a healthy coping and protective mechanism, as change can threaten the status
quo and the organization’s ability to achieve its objectives (Gallagher, Joseph, &
Park, 2002). Recognizing resistance is an important skill, and the ability to work with it is
an even more advanced skill.
A comprehensive description of client resistance is presented by Block (2011), who
describes the following 14 ways that it may be expressed by clients:
1. Give me more detail. The client continually asks for more data, more descriptions,
and more information. Even when presented with the extra facts, it is not enough
and even more is desired.
2. Flood you with detail. The client spends most of the meeting talking, providing
history, background, and commentary on not only the immediate situation but
tangential issues, too.
3. Time. Resistance is expressed as a complaint about not having enough time to
complete the project, to gather data, to meet to discuss the diagnosis and feedback,
or to plan the intervention.
4. Impracticality. The client complains that the solutions are not practical or feasible in
this group, this division, this company, this industry, and so on. This may be
expressed as a complaint about what works in “theory” versus what will work “here.”
5. I’m not surprised. The client accepts the feedback and diagnosis with agreement,
nodding that it makes perfect sense. “Of course that is what is happening; it is what I
knew all along,” the client seems to be saying, avoiding the discomfort that can arise
by being confronted with new information.
6. Attack. Direct attack is a clear form of resistance, when the client expresses anger,
frustration, and irritation through a raised voice and angry words. It is among the
easiest to recognize because it is the most explicit.
7. Confusion. Much like a desire for more information, the client wants another
explanation, expressed in a different way. Then this explanation seems unclear and
another is requested.
8. Silence. The client remains silent during the entire presentation, and the consultant
may be tempted to keep pressing forward until the client speaks up. If confronted,
the client may say that the presentation is “fine” or “good.”
1164495 - SAGE Publications, Inc. (US) ©
Chapter 7 Data Gathering
Promotion, Inc., is a privately held company in the Midwest that serves the direct mail industry with
printing and mailing services. A junior organization development consultant agreed to conduct an
employee survey, specifically with the Mail Division, to determine why turnover rates were much higher
than in other divisions within the company. An internal committee developed 16 possible causes of the
turnover based on interviews with 20 employees. The 102-item questionnaire (which included a
separate page of demographic data as well) was organized into 11 categories, and the survey was
pilot tested with a small group of employees and revised based on their feedback. In the end, all 480
employees in the division were sent a survey in order to ensure that no employee was omitted and that
employees could remain anonymous. The results of the survey held negative feedback for
management about roles between departments, work policies, and employee compensation. Results
of the survey were presented to management in five separate sessions, beginning with top
management and continuing with the internal committee and Mail Division managers. Results took
many by surprise, and some managers walked out of the feedback sessions. Managers were
unwilling to take action based on the feedback and decided to shelve the reports. In the end,
employees were provided with only a brief and highly edited version of the feedback report almost 2
months after the survey was administered (Swanson & Zuber, 1996).
What do you think was done well in the administration of the survey in this case? What do you
think should have been done differently?
Was a survey a good choice for a data gathering method in this case? Why or why not?
With a formal and psychological contract successfully established, a data gathering
strategy is developed to further explore the causes and consequences of the problem
described by the client. Using methods such as interviews, focus groups, surveys,
observations, and unobtrusive measures, the consultant can develop a nuanced and
detailed understanding of the situation so that interventions developed can be both
applicable and more effective. In this chapter we discuss the methods of data gathering
used by organization development (OD) consultants and describe how consultants
choose among them to formulate a data gathering strategy.
To consultants and clients alike, spending time gathering additional data can seem like an
unnecessary and costly exercise. After all, the client has at this point already seen many
instances of the problem being discussed and likely has been able to describe it in detail.
This view is shortsighted, however. Clients see a problem from only one perspective,
and additional perspectives can add useful insights that cannot be
seen without additional data. Gathering data and presenting the information back to the
client can present a more complete picture of the organization and expand both the
client’s and practitioner’s knowledge. It is an intervention in itself, and in many cases it is
the most powerful intervention that a consultant can execute.
The Importance of Data Gathering
Comprehensive data gathering takes time, however, and “many managers and consultants
show a bias for quick intervention and shortchange diagnosis” (M. I. Harrison & Shirom,
1999, p. 8). In an attempt to quickly solve the problem that may have existed for quite
some time, both managers and consultants are tempted to take a shortcut through the
data gathering process, assuming that the information available is sufficient. However,
“managers and other decision makers run serious risks if they eschew diagnostic inquiry
and systematic decision making altogether when uncertainty and pressure for quick
action intensify” (p. 9). Despite these warnings, speed frequently trumps accurate data
and careful diagnosis.
Nadler (1977) writes that there are three reasons why consultants should take data
gathering seriously. First, good data collection generates “information about
organizational functioning, effectiveness, and health” (p. 105). Argyris (1970) explains:
“Without valid information it would be difficult for the client to learn and for the
interventionist to help. . . . Valid information is that which describes the factors, plus their
interrelationships, that create the problem for the client system” (p. 17). Good data
collection should expand the practitioner’s and client’s knowledge of the problem.
Second, data collection can be a force that can spark interest in change. It can bring
organizational members together on a common definition of the situation that they can
then agree to change. Nadler (1977) writes that in this respect, “Collection can be used for
consciousness raising—getting people thinking about issues concerning them and the
organization” (p. 105).
Finally, practitioners who do data collection well can “continue the process of relationship-building between the change agent, his or her internal partners, and the organization”
(Nadler, 1977, p. 106). The change agent has the opportunity to meet organizational
members, demonstrate empathy and credibility by focusing on individuals and their
perspectives, and develop cooperative and trusting relationships so that the practitioner
can help the organization to change.
Presenting Problems and Underlying Problems
In initial meetings with practitioners, clients describe presenting problems. Presenting
problems are those initial explanations of the situation that highlight symptoms of which
the client is most painfully aware. Beneath presenting problems lie underlying problems.
Underlying problems can be described as the root cause or core, fundamental issues that
are producing the symptoms. Interventions designed to address presenting problems but
that do not address underlying problems are likely to produce only a short-term, negligible
impact. These interventions are commonly the “simple fix” that clients may actually prefer.
After all, they usually match the client’s framing of the issues, they are frequently easier to
address, and they often involve process issues or other task-oriented changes that avoid
personal change or interpersonal conflict. Unfortunately, they rarely solve the underlying
problem contributing to the surface-level symptoms that are easier to see.
For example, Block (2001) describes several common presenting problems framed by
clients that consultants may erroneously try to fix. If a client wants more cooperation from
a team, the change agent may try to get all relevant parties in a room, discuss their
objectives, discuss working relationships, and agree on communication patterns.
Reframing the problem as one of territory between groups changes the problem from the
lack of cooperation to a negotiation of boundaries and group identity, where each
individual or group may need to give up something for the good of the larger organization.
Another common example concerns training: A client sees poor results and wants more
training or education for organizational members. The client may not see that there are
process and motivational barriers inhibiting organizational members from acting. If a
consultant acts on a request for training without understanding why organizational
members act in the way they do, the consultant will take up time and resources developing
a training program that may have very little to do with why results are not being achieved.
The point is that without greater detail as to the nature and extent of the problem from
different perspectives, the chosen interventions may target the wrong areas, and they may
even deepen conflict and frustrate organizational members. Presenting problems are an
initial place to start, but the practitioner’s real concern must be with the underlying
problems. These can best be explored through data gathering.
Data Gathering Process
Noolan (2006) recommends a five-step process for data gathering:
1. Determine approach to be used. Each method of data gathering has advantages and
disadvantages. Based on the client’s description of the problem, the consultant
should determine what data should be collected and why.
2. Announce project. The client or another representative should explain to
organizational members what data are being gathered, by whom, using what
methods, and for what purposes.
3. Prepare for data collection. Surveys or interview guides should be prepared, along
with a list of potential interviewees. Interviewees should be contacted and a time and
place scheduled.
4. Collect data. Use appropriate protocols, depending on the data gathering approach
selected.
5. Do data analysis and presentation. Practitioners may choose to use one or more
diagnostic models to analyze the data and give feedback to the client. (This part of
the process is discussed in the next chapter.)
This approach can vary somewhat, with different considerations for successful data
gathering, depending on the data gathering method used. Each approach requires a
different length of time for gathering information and analyzing it, a different level of
expense, and a different investment on the part of organizational members, clients, and
consultants. Some approaches, such as interviewing, can have a psychological effect on
the organization, whereas unobtrusive measures generally occur behind the scenes
without much fanfare or publicity. In this chapter we expand on the specific details for
each step of the data gathering process for each method.
Data Gathering Methods
Organization development practitioners use five common methods of data gathering to
explore presenting problems. In the following sections, we explore these methods,
including why practitioners use that approach, what advantages or disadvantages each
approach presents, and what pitfalls or potential problems practitioners can experience in
using each approach. We also explore tips for successful data gathering using each
approach:
1. Interviews
2. Focus groups
3. Surveys/questionnaires
4. Observations
5. Unobtrusive measures
Interviews
Interviews are generally one-on-one meetings during which practitioners speak directly
with individual organizational members. The practitioner is interested in the individual
stories and perspectives of organizational members and in a personal setting can explore
their history, experiences, beliefs, and attitudes in depth. Seidman (2006) writes that “at
the root of in-depth interviewing is an interest in understanding the lived experience of
other people and the meaning they make of that experience” (p. 9). The primary
advantages of interviewing as a method for data gathering include the ability to
understand a person’s experience and to follow up on areas of interest. Interviews can
yield surprises that a practitioner may not know enough in advance to ask about. In many
cases, interviews can be the only choice in getting at specific issues, such as employee
experiences with a manager of a small group or a conflict between two managers. In these
cases, employees may be unlikely to respond to a written questionnaire with adequate
detail to truly understand the problem, and it may not be possible to witness the situation
personally through observations. Even if a practitioner could see the situation personally,
interviews can allow the practitioner to better understand how organizational members
interpret a situation or what attitudes and beliefs they have about it.
Data gathering through interviews relies heavily on cooperation from organizational
members who will only open up to discuss serious issues if they trust the interviewer
(Seidman, 2006). Interviews can be threatening, as members may feel defensive if they are
personally involved in a problem and they may be motivated to stretch the truth to present
themselves in a positive light. Consequently, among the five data gathering methods,
interviewing requires the greatest interpersonal skill of OD practitioners. Interviewers
should be adept at placing people at ease in a one-on-one situation, excellent listeners,
and skilled conversationalists. Interviews can generate a tremendous amount of data, with
organizational members sharing stories, examples, and personal beliefs, including issues
relevant to the issue the practitioner is investigating and those tangential to it. These data
can be difficult and time-consuming for the practitioner to sort through after the interviews,
and they may suffer from the practitioner’s or client’s biases or interest in seeing certain
facts that may not be as apparent as the practitioner wants to believe.
To conduct data gathering successfully using interviews, an interviewer should follow
these guidelines:
1. Prepare an interview guide. Interviews can be formal and structured, with each
interviewee asked the exact same set of questions without straying from the list of
questions, or they can be semistructured, with an interview guide containing a
general list of open-ended questions addressing the major topics of the interview.
Open-ended questions are those that typically require the participant to provide more
detailed answers, whereas closed-ended questions can be answered with a word or
short phrase. It is usually a better choice to ask a respondent “How would you
describe your relationship with the members of this team?” than “Do you feel there is
too much conflict on this team?” which tends to suggest an area of the interviewer’s
interest rather than the interviewee’s. With semistructured interviews, the interviewer
adds probes, or follow-up questions, where appropriate, and can explore other areas
that were not predicted in the interview guide. Follow-up probes can include
questions such as “Why do you think that is true?” or “Can you give an example?”
Most OD interviews are semistructured.
2. Select participants. When only a small team is involved, interviewing every team
member is a reasonable approach. However, because interviewing can be time-intensive and resource-consuming for both the organization and the interviewer, it
may not be possible to interview every relevant organizational member. For example,
to gather data from employees on a manufacturing floor, the practitioner may have
to sample a certain number of employees from the day shift and night shift, or to
select employees from line A and line B. How many interviews to conduct likely will
depend on the time available, the problem to be investigated, and the population
from which participants are selected. A practitioner can be more confident that
enough people have been chosen when interviews begin to get repetitive and
substantiate the same common issues. The selection process can be random (using
an established randomization protocol such as a random numbers table or
computerized selection method) or systematic (taking every third person on an ordered
list by employee number, for example). The selection of interviewees also can be
intentionally based on the participants’ knowledge or involvement in the topic being
discussed. (With their greater knowledge of the organization, clients should help in
selecting interviewees.) Still another approach is to conduct what social science
researchers term “snowball” sampling, in which a researcher begins with one or more
participants and concludes each interview by asking the interviewee who else he or
she would recommend interviewing. Thus, the network is tapped for its own
knowledge and access to another interviewee is potentially smoother. In any case,
the practitioner should be prepared to justify these choices, since organizational
members in sensitive situations may attribute meaning to interviewee selection, even
if it was random.
3. Contact participants and schedule interviews. When contacting each potential
interviewee, the interviewer should explain the purpose of the interview and how long
it is expected to take. It can be helpful to have the client or sponsor contact
interviewees first to invite their participation, promising contact from the OD
practitioner to schedule the interview. This approach can have the advantage of
easing access and encouraging responsiveness, particularly for an external
consultant, but it can have the disadvantage of associating the consultant with a
client’s goals and objectives. The change agent can be more explicitly seen as an
“agent” of management, and interviewees may be suspicious or withhold important
information if they do not trust the sponsor. In any case, interview participation
should be framed as a free choice with no consequences for refusing to participate,
and potential interviewees should be given the option to participate free from any
coercion. In sensitive situations, practitioners may advise that the client suggest a
list of possible interviewees from which a certain number will be chosen.
Contact methods for scheduling interviews commonly include telephone calls and e-mail. The more sensitive the topic of the interview, the more the OD practitioner
should consider the most personal method of contact. Simply scheduling interviews
to get feedback on noncontroversial matters such as employee satisfaction with
general working conditions or whether a new process is working correctly may easily
be done by e-mail. Interviews discussing interpersonally sensitive topics such as
conflict with a manager or coworker are best done in person or by telephone.
Descriptions of the purpose of the interview should be consistent between the
client’s description and the practitioner’s, as well as between interviews. Participants
are likely to speak with other colleagues or their managers about the interviews,
sharing the topics and questions among each other. If these descriptions are not
consistent, participants may rightfully be apprehensive about the stated purpose or
intent of the interviews.
Finally, a location for the interview should be selected that allows for the best
interaction possible. This means a private location free from distractions such as
phone calls and personal interruptions. The interview should be conducted in an
environment in which conversation can take place without disturbing or being
overheard by others.
4. Begin the interview and establish rapport. The interviewer should begin with a
personal introduction, again explaining the purpose of both the engagement and the
interview, including what topics will be discussed. The interviewer should also take
the time to explain what he or she will do with the data from the interview and who
will see it (if anyone). The interviewee may want to know how interviewees have
been selected and who else is being interviewed, which the practitioner may be able
to share in a broad categorical sense such as “all managers,” “about half of the
second engineering shift,” or “five people at random from each of the company’s four
sites.” The practitioner can also state what notes will be taken and what will be done
with the notes, in order to explain why the interviewer may be writing during the
interview. Waclawski and Rogelberg (2002) recommend that seating be arranged so
that interviewees can read the interviewer’s notes to further put the interviewee at
ease regarding what is written, and that interviewers alter their notes if there is
anything written that makes the interviewee uncomfortable.
To put the interviewee at ease, it is a good practice to begin the interview with a
relatively safe set of questions about the interviewee’s background, length of time
with the company, and current and previous roles. It can also be useful for an
interview to begin with a “grand tour” question (Spradley, 1979) in which the
interviewer opens up the interview with a broad, general subject, such as “Tell me
about a typical day for you at work” or “Tell me about your involvement with this
group since you joined the company.” While such questions tend to produce
somewhat longer and wandering responses, they can be instructive as an overview
without asking too many questions or introducing bias.
Many practitioners make a useful distinction between confidentiality and anonymity
in interviews. Information is confidential if no one other than the consultant will
know what was said in the interview—in other words, what is said stays with the
consultant. Information is anonymous if it can be shared outside the interview but
separated from the source (i.e., the interviewee’s name). Generally interviewers can
promise that information from interviews will remain anonymous but not
confidential. That is, to be able to use the data and act on them, the practitioner
must be able to share the data outside of the interview, but the practitioner should
not share who said what without a participant’s explicit permission (Freedman &
Zackrison, 2001).
5. Conduct the interview by following the interview guide, straying from it when
appropriate. Interviews are primarily a conversation, albeit one with a specific
purpose. Interviewers need to listen carefully to the current response, think about any
follow-up questions, remember the other areas that are listed in the interview guide,
and be conscious that time is limited. The best interviewers can maintain the
character of the interview as a conversation without being distracted by other tasks.
6. Close the interview. Close by inviting the participant to pose any
questions that he or she may have. Conclude by thanking the interviewee and
reiterating the timeline for what will happen next and when the participant will hear
results, if at all. Most people are naturally curious about what will happen next, and it
is important that the conclusion of the interview sets the appropriate expectation.
The interviewer can also choose to provide a business card or other contact
information in case additional questions arise or the interviewee wishes to clarify
something after the interview.
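Several of the mechanics in the steps above lend themselves to a short sketch. The snippet below is a hypothetical illustration, not anything prescribed by the source: it shows the random and systematic selection options from step 2 and the anonymity-versus-confidentiality distinction from step 4. The employee IDs, field names, and sample size are invented for the example.

```python
import random

def random_sample(employees, n, seed=None):
    """Simple random selection -- a stand-in for a random numbers
    table or other computerized selection method (step 2)."""
    rng = random.Random(seed)
    return rng.sample(employees, n)

def systematic_sample(employees, step, start=0):
    """Take every k-th person on an ordered list, for example a
    list ordered by employee number (step 2)."""
    return sorted(employees)[start::step]

def anonymize(notes):
    """Separate the data from its source: quotes can be shared
    outside the interview, but who said what stays with the
    consultant (anonymous, not confidential -- step 4)."""
    return [{"quote": n["quote"], "role": "participant"} for n in notes]

# Hypothetical population of 12 employee IDs
population = [f"E{i:03d}" for i in range(1, 13)]
print(random_sample(population, 4, seed=7))
print(systematic_sample(population, 3))  # every third person on the list
print(anonymize([{"name": "Pat", "quote": "Deadlines slip weekly."}]))
```

Snowball sampling is not shown because it cannot be computed in advance; each interviewee's referrals determine who is contacted next.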
Tips for Successful Interviews
1. Listening is a critical skill in interviewing. It is important to avoid interrupting an
interviewee with another question out of eagerness to move on to another area.
Listening for emotion as well as content can suggest areas for follow-up questions.
Noticing hesitancy in the interviewee’s voice, the practitioner can ask, “You seem
reluctant to talk about the budgeting process. Can you say more about your
thoughts?”
2. Avoid indicating agreement or disagreement with the interviewee, or suggesting that
the interviewee’s responses are similar to or different from other interviews. Even
head nodding or brief verbal feedback such as “yes” or “uh huh,” used to encourage the
interviewee to continue, can be seen instead as a sign of agreement that may
change the interview. Likewise, Seidman (2006) recommends only rarely sharing
your own experiences that are similar to an interviewee’s, since the interviewer can
become the focus and can potentially alter the direction of the interview. The best
advice is to emphasize interest in the interviewee’s experience, not in one particular
answer.
3. Take notes sparingly during the interview, and write more immediately after it ends. It
is rarely possible to take verbatim notes and participate fully in the conversation.
Taking very brief notes with key words, followed by a more complete record after the
interview, can allow the consultant to pay closer attention to what is being said.
When verbatim quotes are desired, the consultant can ask the interviewee to repeat
what was just said, noting that “what you just said is so important I want to write it
down word for word.” To allow extra time to take notes after interviews, avoid
scheduling them back-to-back. Some consultants choose to audiotape or videotape
interviews, thinking that it will save time taking notes. This practice can actually take
more time, however, since it requires spending time equal to the length of the
interview listening to or watching a tape. Given concerns about where the tapes may
end up, potential technical problems with equipment, and an interviewee’s likely
discomfort, this approach usually presents more disadvantages than advantages. If
additional note-taking is desired to capture a great deal of data, and the consultant
cannot do this alone, another consulting partner could attend the interview only for
the purpose of taking notes. This can be very appropriate in some circumstances but
may make some interviewees uncomfortable.
In summary, interviews are probably the most common method of data collection in OD.
Good interviewing requires a consultant to have good interpersonal skills in establishing
rapport, maintaining a conversation, steering the conversation to relevant areas of interest,
and listening actively. While they can be time-intensive, they can be well worth the
additional effort expended to gain knowledge and background detail into the organization.
Because gaining the time to conduct one-on-one interviews can be difficult, many
consultants turn to the group interview, or focus group.
Focus Groups
Focus groups are typically small groups of organizational members facilitated
by a consultant who poses questions and then allows for group discussion. Focus groups
have been used by social scientists as a research tool for many years, and in recent years
they have also been used frequently for public relations and market research activities
(Smithson, 2000). Like interviews, focus groups allow the consultant to explore
experiences and situations in depth and to follow up on specific areas of interest. They are
not observations of group behavior in the group’s ordinary work activities, but a special
conversation instigated by a consultant’s questions and facilitated and directed by the
consultant. Unlike one-on-one interviews, focus groups are rarely appropriate for very
sensitive issues (Waclawski & Rogelberg, 2002). Issues of a general nature, such as how
employees feel about the company’s benefit plan or what they feel should be done to
improve executive communications, can be good candidates for focus groups. In focus
groups, participants can build on one another’s ideas. They can brainstorm, discuss, and
debate. Because they can allow for a wide range of participation and orient members
toward group or team involvement, two key values of OD, many practitioners like to use
focus groups as a method of data gathering. As a disadvantage, focus groups can
generate a tremendous amount of data that can be difficult and time-consuming to
analyze.
To conduct a focus group, consultants should follow a process similar to interviewing.
The considerations are somewhat different, however, since the subject matter and
structure of a focus group are different from individual interviews:
1. Prepare an interview guide. Interview guides for focus groups are likely to be shorter
than those used in one-on-one interviews, since the participation level will be much
greater for a similar amount of time. Consequently, it is likely that fewer subjects will
be covered. In addition to the interview guide, the consultant should prepare some
opening and closing remarks that explain the purpose of the interview, what will be
done with the data, and how the data will be recorded.
2. Select participants. Participants can be selected randomly, as in interviews, or they
can be selected based on some other criteria. Groups can be homogeneous—that is,
selected based on some criteria they share. Examples include intact teams,
managers, all employees in the New York office, salespeople, college interns,
employees with 10 or more years with the company, or customer service personnel.
Groups can also be heterogeneous, or a mixed group, where the group’s composition
is more diverse. The advantage of homogeneous groups is that these employees
may share a similar background because they have something in common, and they
may be able to provide depth and detail into a problem from that perspective.
Customer service personnel, for example, may be able to build on one another’s
contributions to create a more complete picture of quality problems with the
company’s products. By having different backgrounds or roles in the organization,
mixed groups can offer the consultant the advantage of seeing patterns common to
all organizational members regardless of role or demographic characteristic. The
question of whether to use homogeneous or heterogeneous groups thus depends on
the focus group’s purpose. Waclawski and Rogelberg (2002) find an advantage in
having each group be homogeneous, but recommend conducting enough focus
groups so that a heterogeneous population participates in the overall data collection.
As with interviews, a location should be selected that is free from distractions and
interruptions for the length of the discussion. It is best for the location to contain a
large oval table or for chairs to be arranged in such a way that all participants can
see one another (Waclawski & Rogelberg, 2002). The number of participants per
group depends on the subject’s complexity but should be somewhere from 5 to 15
people. Successful focus groups can be conducted with more participants, but since
time is likely limited, participants may struggle to contribute if the group numbers
more than 20. Again, like with interviews, invitations to participants should explain
the purpose and structure of the focus group, and participants should be reminded
that attendance is voluntary and there should be no consequences for
nonparticipation. There should be no peer pressure to participate (or not) and it is
best that consultants minimize the potential for organizational members to know
who did and who did not participate. For this reason, it is wise to issue invitations
not to a large group at once but directly to each individual invited to participate.
Thus, those who decline will remain known only to the consultant.
3. Hold the focus group. The consultant or focus group facilitator should begin by
welcoming participants, and reiterating the purpose and explaining the structure of
the focus group. Participants should introduce themselves if they do not know one
another already, in order to understand one another’s organizational roles and
backgrounds. The facilitator should explain what and how notes will be taken, as
well as what will happen with the results of the focus group(s). The facilitator should
next propose several ground rules for participation, including the following:
What is said during the course of the meeting should remain known only to the
participants and should not be repeated to others.
Roughly equal participation is the goal, so the facilitator is likely to intervene
from time to time to ensure that all voices are heard.
The facilitator may intervene to keep the group on track or focus conversation
back to the question under discussion.
The purpose is to explore issues; personal comments or attacks are not
appropriate.
The group may wish to agree on other ground rules as well, and the distinction
between confidentiality and anonymity should be made at this point.
If the group seems hesitant to begin, the facilitator can open with a safe,
low-stakes question to encourage participation. So that each person’s voice can be
heard and any initial hesitation to contribute can be lessened, a directive opening
question often works well, such as “What one word comes to mind when you think of
the physical environment of the building?” or “In one sentence, what’s the best
thing about your job?” It is not necessary to go around the circle in order (a
conversation is the objective), but that is an option for groups that are especially
quiet.
As the facilitator follows the interview guide, it may be necessary to quiet any
monopolizing participants and to encourage those who are shy or have a difficult
time jumping into the flow of conversation. Making eye contact with those who look
as though they are waiting to participate is a very subtle way of encouraging
participation, but more directive comments may be necessary, such as “Ray has
been trying to say something” or “Just a minute, Rick. I think Shanthi has something
to add.” Frequent contributions from one member can bias the facilitator’s
perception and give the impression that the entire group shares a single view. It may
be important to test this with the group from time to time if the group does not
comment. This can be done by saying simply, “Sheila has offered her suggestion for
what should be done about the procurement process. Does everyone agree with
that?” or “Who can offer another idea?”
Because the focus group often results in individuals sharing a group experience,
some participants may be reluctant to offer a different view, especially if members
have a close relationship outside of the focus group. Such groupthink should be
tested by the facilitator as well. When participants know one another outside of the
focus group, the facilitator should keep in mind that the members have a shared
history and other organizational roles that will shape the course of the
conversation. Members may work together frequently, they may have had a
contentious relationship in the past, or they may have unequal status. Current or
past conflicts, tight coworker relationships, or strong team identification likely will be
manifested in the focus group setting. (Recall the OD value of the “whole person.”
People are greater than their organizational roles in the focus group.) To the extent
possible, the facilitator should try to be aware of these dynamics to better
understand what is being said and why.
The facilitator must also balance the need to get through the interview guide with
the need to explore emerging topics of interest. Since the goal is an in-depth exploration of issues, the
facilitator will need to listen carefully to the various contributions and offer
additional probing questions, such as “I’ve heard a theme from many people saying
that they don’t seem to trust the management team. Tell me what they could do to
change that.”
Like interviews, focus groups are conversations among several participants. Careful
listening and skilled conversational and facilitation practices are central to...