Introduction: Understanding and Dealing With Organizational Survey Nonresponse

Steven G. Rogelberg
University of North Carolina at Charlotte

Jeffrey M. Stanton
Syracuse University

Organizational Research Methods, Volume 10, Number 2, April 2007, 195-209
DOI: 10.1177/1094428106294693
© 2007 Sage Publications
A survey is a potentially powerful assessment, monitoring, and evaluation tool available to organizational scientists. To be effective, however, a survey requires that individuals complete it, and in the inevitable case of nonresponse, we must understand whether our results exhibit bias. In this article, the nonresponse bias impact assessment strategy (N-BIAS) is proposed. The N-BIAS approach is a series of techniques that, when used in combination, provide evidence about a study's susceptibility to bias and its external validity. The N-BIAS techniques stem from a review of extant research and theory. To inform future revisions of the N-BIAS approach, a research agenda for advancing the study of survey response and nonresponse is provided.
Keywords: surveys; nonresponse; survey response; bias; response rates

Authors' Note: Both authors contributed equally to this article. Correspondence concerning this article should be addressed to Steven Rogelberg, Organizational Science/Department of Psychology, University of North Carolina at Charlotte, Charlotte, NC 28223-0001; e-mail: sgrogelb@email.uncc.edu.
Field surveys can provide rich information to researchers interested in the human situation as it exists in vivo. In the context of organizational research, surveys can effectively
and efficiently assess stakeholder (employees, management, students, clients) perceptions
and attitudes for a variety of purposes. According to Kraut (1996), survey purposes include
the pinpointing of organizational concerns, observing long-term trends, monitoring program impact, providing input for future decisions, adding a communication channel, performing organizational behavior research, assisting in organizational change and improvement,
and providing symbolic communication. Because the value of a survey in addressing these
purposes is dependent on individuals participating in the research effort, low response rates
are a perennial concern among researchers and others who conduct, analyze, interpret, and
act on survey results.
Low response rates can cause smaller data samples. Smaller data samples decrease statistical power, increase the size of confidence intervals around sample statistics, and may
limit the types of statistical techniques that can effectively be applied to the collected data.
A low response rate can also serve to undermine the perceived credibility of the collected
data in the eyes of key stakeholders (Luong & Rogelberg, 1998). Most important, low
response rates can undermine the actual generalizability of the collected data because of
nonresponse bias. Where nonresponse bias exists, survey results can produce misleading
conclusions that do not generalize to the entire population (Rogelberg & Luong, 1998).
Research on issues pertaining to nonresponse has a long history; the research continues
today as evidenced by a quick look at the large number of recent publications and books referenced in the articles in this feature topic. F. Stanton (1939) wrote one of the first empirical pieces on the topic in the Journal of Applied Psychology titled, “Notes on the Validity of
Mail Questionnaire Returns.” This was quickly followed up by Suchman and McCandless’s
(1940) Journal of Applied Psychology article titled, “Who Answers Questionnaires?”
W. Edwards Deming (1953), the father of statistical process control and total quality management, is sometimes reckoned as the first to develop a framework for understanding the
impacts and trade-offs of nonresponse, although public opinion research pioneer Helen
Crossley published scholarly work on nonresponse that predated Deming’s (e.g., Crossley &
Fink, 1951). In the wake of the 1948 U.S. presidential election–when opinion poll predictions favoring Dewey over Truman proved spectacularly wrong–survey researchers began to
focus systematically on threats to the validity of their conclusions. The realization of how
badly nonresponse bias could affect the interpretation of survey results was a key driver
behind the development of a wide variety of techniques to enhance response rates.
Setting aside for the moment the extensive literature on techniques to increase response
rates, examinations of nonresponse tend to fall into two primary categories, one focusing
on the detection and estimation of the extent of nonresponse bias and the other focusing on
statistical methods of compensating for nonresponse (e.g., through imputation of missing
data). Two of the most influential scholarly works in the area of nonresponse exemplify
these two streams of research. Armstrong and Overton (1977) published a widely cited set
of strategies for estimating nonresponse bias in survey studies, whereas Rubin (1987)
developed a book-length treatment of methods for imputing data in sample surveys. Closer
to home in the organizational literature, a further distinction is made based on level of
analysis, with researchers such as Tomaskovic-Devey, Leiter, and Thompson (1994) focusing on nonresponse by organizational representatives when the sampled unit is organizations, whereas others such as Rogelberg, Luong, Sederburg, and Cristol (2000) examined
nonresponse by employees within organizations.
In this article, we take a brief look backward by examining organizational survey nonresponse and identifying key issues and implications related to response rates and nonresponse
bias. Given existing knowledge, we ask and answer the question of what a survey researcher
can do to test for and examine whether obtained data are bias susceptible. Next, we look forward and propose a research agenda for advancing the study of survey nonresponse.
Elusive “Acceptable” Response Rate
Based on extant literature (Dillman, 2000; Fowler, 2002; Fox, Crask, & Kim, 1988;
Heberlein & Baumgartner, 1978; James & Bolstein, 1990; Yammarino, Skinner, & Childers,
1991; Yu & Cooper, 1983), well-known response facilitation techniques are presented in Table
1. More important, though, even researchers who have the time and resources to follow all
of these recommendations will rarely achieve 100% response. In addition, the problem of
nonresponse appears to be worsening over time.
Table 1
Response Facilitation Approaches

Prenotify participants: Prepare potential participants for the survey process by personally notifying them that they will be receiving a survey in the near future.

Publicize the survey: Actively publicize the survey to respondents (e.g., posters, e-mails). Inform survey respondents about the purpose of the survey and how survey results will be used (e.g., action planning).

Design carefully: Consider the physical design of your survey: Is it pleasing to the eye? Easy to read? Uncluttered? Are questions evenly spaced?

Provide incentives: Provide upfront incentives to respondents, where appropriate. Inexpensive items such as pens, key chains, magnets, or certificates for free food/drink have been shown to increase response rates.

Manage survey length: Use a theory-driven approach to survey design, which will help determine critical areas that should be addressed within the survey instrument as opposed to including too much content.

Use reminder notes: Send reminder notes to potential respondents beginning 3 to 7 days after survey distribution.

Provide response opportunities: Ensure that everyone is given the opportunity to participate in the survey process (e.g., provide paper surveys where employees do not have access to computers, schedule time off the phone for employees in call centers, have the survey run for sufficient time so that vacation time does not impede response).

Monitor survey response: Monitor response rates so that HR generalists and/or the survey coordinators can identify departments with low response rates. Provide feedback and consider fostering friendly competition between units.

Establish survey importance: An understanding of the importance of their opinions and participation will help increase the likelihood of survey completion.

Foster survey commitment: When applicable, involve a wide range of employees (across many levels) in the survey development process.

Provide survey feedback: Once the survey data are collected, survey feedback should be provided. Rather than influencing the present survey, this approach influences future survey efforts by positive use of the survey results.
For example, Baruch (1999) reported that in 1975, the typical survey response rate for studies in the top organizational research journals (e.g., Journal of Applied Psychology, Academy of Management Journal) was 64.4%. In 1995, however, the typical survey response rate reported in these same journals dropped to
approximately 50%. Neither the use of Web-based survey techniques nor other emerging
data collection technologies (e.g., surveys on handheld devices) has proved a panacea for
increasing response rates (Cook, Heath, & Thompson, 2000). Furthermore, given the popularity of opinion polls, the emergence of hundreds of online survey outfits, and the
predilection of organizational managers for insight into the attitudes and beliefs of constituents and stakeholders, oversurveying has worsened the situation (Weiner & Dalessio,
2006). Perhaps as a result of these continuing declines, the bar for “acceptable” response
rates is also quite low.
Research reports containing survey results often justify an obtained response rate on the basis of its consistency with industry standards or with what is typically found in a given
area of research. Although such descriptions do put a response rate into context, the fact
that everyone else also achieves 30%, 40%, or 50% response does not help to demonstrate
that the reported research is free from nonresponse bias. If the standard response rate in
one’s research area is 45% and a new article in the same area achieves 55%, we suggest that
using this achievement to claim superior results is folly because the difference between the
new study’s response rate and the standard contains little if any information about the presence, magnitude, or direction of nonresponse bias present in the data.
In contrast, if a study does obtain a response rate well below some industry or area standard, this also does not automatically signify that the data obtained from the research were
biased. Thus, researchers who suppress or minimize the importance of results on the basis
of a low response rate have also done a disservice to their audience, by failing to analyze
whether their low response rate truly had a substantive impact on conclusions drawn from
the data. In the absence of good information about presence, magnitude, and direction of
nonresponse bias, ignoring the results of a study with a 10% response rate—particularly if
the research question explores a new and previously unaddressed issue—is just as foolish
as assuming that one with a response rate of 80% is unassailable. In the following section,
we dissect nonresponse bias in greater detail to document exactly why this is so.
The Nature of Nonresponse Bias
Nonresponse bias is traditionally operationalized with the following heuristic formula:

$$\text{Nonresponse bias} = P_{NR}\left(\bar{X}_{Res} - \bar{X}_{Pop}\right)$$

where $P_{NR}$ refers to the proportion of nonrespondents, $\bar{X}_{Res}$ is the respondent mean on a survey-relevant variable, and $\bar{X}_{Pop}$ is the population mean on the corresponding survey-relevant variable, if it were actually known (Rogelberg & Luong, 1998). More important, in place of the mean shown in the nonresponse bias equation, other descriptive indices (e.g., a correlation coefficient) along with their corresponding population parameters can also be used. Examining this heuristic formula in detail, the possible range of bias depends on both the response rate, which bounds the extent to which the sample may be biased, and the distinctiveness of nonrespondents. The following two scenarios adapted from Fowler (2002) illustrate this point.
• Suppose a population of 100 is surveyed, and 90 respond (response rate of 90%). Of those 90,
45 say yes to some question; the other 45 say no. There are 10 people (the nonrespondents)
whose views we do not know. If these nonrespondents would have responded with a yes, the
true figure for the population would be 55% yes. If they would have responded with a no, the
true population rate would be 45% yes. Despite the 90% response rate, the range is quite large, and it spans a region where either yes or no could have been the majority vote.
• Suppose a population of 100 is surveyed, and 10 respond (response rate of 10%). Of those 10, half say yes to some question; the other half say no. There are 90 people (the nonrespondents) whose views we do not know. If half of these nonrespondents had responded with a yes, the true figure for the population would be 50% yes—identical to what the sample results showed. Despite the low response rate, no bias arises because these nonrespondents are not distinctive: the respondent mean equals the population mean, so the bias term in the heuristic formula is zero.
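To make the arithmetic behind these scenarios concrete, the short sketch below (ours, not from the article) computes the best- and worst-case population proportions implied by a given response rate and an observed respondent split:

```python
# A minimal sketch: bound the possible population "yes" proportion given
# only the response rate and the observed respondent split, as in the two
# scenarios above. Values and the function name are illustrative.

def population_bounds(n_population, n_respondents, n_yes_among_respondents):
    """Return the (lowest, highest) possible population 'yes' proportions."""
    n_nonrespondents = n_population - n_respondents
    low = n_yes_among_respondents / n_population                         # all nonrespondents say no
    high = (n_yes_among_respondents + n_nonrespondents) / n_population  # all nonrespondents say yes
    return low, high

print(population_bounds(100, 90, 45))  # scenario 1: (0.45, 0.55)
print(population_bounds(100, 10, 5))   # scenario 2: (0.05, 0.95)
```

The 10% scenario's wide bounds show why the distinctiveness of nonrespondents, not the response rate alone, determines the actual bias.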
Nonresponse Bias Impact Assessment Strategy
Examining the heuristic formula and extreme scenarios above, it becomes evident that response rate alone is an inaccurate and unreliable proxy for study quality.
Table 2
N-BIAS Techniques

Archival analysis: Compare respondents to nonrespondents on variables contained in an archival database. (Quality rating: 2)

Follow-up approach: Resurvey nonrespondents. (Quality rating: 2)

Wave analysis: Compare late respondents to early respondents. (Quality rating: 1)

Passive nonresponse analysis: Examine the relationship between passive nonresponse characteristics and standing on the key survey topics being assessed. (Quality rating: 3)

Interest-level analysis: Assess the relationship between interest in the survey topic in question and standing on the key survey topics being assessed. (Quality rating: 3)

Active nonresponse analysis: Assess percentage of purposeful, intentional, and a priori nonresponse using interviews. (Quality rating: 2)

Worst-case resistance: Use simulated data to determine robustness of observed findings and relationships. (Quality rating: 2)

Benchmarking analysis: Use measures with known measurement properties and normative data so that observed data can be cross-referenced. (Quality rating: 1)

Demonstrate generalizability: Replicate findings using a different set of research methods. (Quality rating: 4)

Note: Quality ratings (1 = lower quality to 4 = higher quality) were assessed qualitatively by the authors of the article based on the conclusiveness of the evidence provided. N-BIAS = nonresponse bias impact assessment strategy.
We are not suggesting that improving response rates is a quixotic goal; rather, researchers' major efforts and resources should go into understanding the magnitude and direction of bias caused by nonresponse. We advocate that researchers conduct a nonresponse bias impact assessment,
regardless of how high a response rate is achieved. In the material below, we provide a conceptual outline of a nonresponse bias impact assessment strategy (N-BIAS). Overall, we view
this process as similar to a test validation strategy (Rogelberg & Luong, 1998). In amassing
evidence for validity, each of several different validation methods (e.g., concurrent validity)
provides a variety of insights into validity. Each approach has strengths and limitations. There
is no one conclusive approach and no particular piece of evidence that is sufficient to ward
off all threats. Likewise, assessing the impact of nonresponse bias requires development and
inclusion of different types of evidence, and the case for nugatory impact of nonresponse bias
is built on multiple pieces of evidence that converge with one another.
N-BIAS Methods
N-BIAS is presently composed of nine techniques (see Table 2). The first three techniques are only briefly reviewed as many researchers may already be familiar with them
(for more information, see Rogelberg & Luong, 1998). The remaining six techniques are
newer approaches that stem from recent research findings.
Technique 1: Archival analysis. The researcher identifies an archival database that contains the members of the whole survey sample (e.g., personnel records, school records, and
membership information). After data collection, code numbers on the returned surveys (or
access passwords) can be used to partition the information in the archival database into two
segments: (a) data concerning respondents and (b) data concerning nonrespondents.
Although some observed differences might exist, it is important to understand that these
differences between respondents and nonrespondents (or the population in general) do not
necessarily indicate nonresponse bias. Bias exists only when the observed archival differences
are also systematically related to responses on the survey topic(s). For example, different
response rates by gender are a concern only if gender is related to actual responses to the
survey topic(s). To emulate the archival approach without the use of code numbers, a
researcher can include specific survey questions to elicit information also found in the
archival data set (e.g., gender). This technique allows for profile comparisons between
the observed sample and the archival information gleaned for the entire initial sample (or
population).
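As a hypothetical illustration of this partitioning, the sketch below uses pandas and SciPy; the identifiers, column names, and values are invented for the example and are not from the article:

```python
# Partition an archival frame into respondents and nonrespondents using
# the code numbers on returned surveys, then compare the groups on an
# archival variable. All data are illustrative.
import pandas as pd
from scipy import stats

archive = pd.DataFrame({
    "employee_id": range(1, 11),
    "tenure_years": [2.0, 5.5, 1.0, 7.2, 3.3, 10.1, 0.5, 4.4, 6.0, 2.8],
})
respondent_ids = {1, 2, 4, 5, 7, 9}   # code numbers on returned surveys

archive["responded"] = archive["employee_id"].isin(respondent_ids)
resp = archive.loc[archive["responded"], "tenure_years"]
nonresp = archive.loc[~archive["responded"], "tenure_years"]

t, p = stats.ttest_ind(resp, nonresp, equal_var=False)
print(f"tenure difference: t = {t:.2f}, p = {p:.3f}")
```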
Technique 2: Follow-up approach. Using identifiers attached to returned surveys (or access passwords), respondents, and by extension nonrespondents, can be identified. The
follow-up approach involves randomly selecting and resurveying a small segment of nonrespondents (typically an abridged survey is used) either by phone, mail, or e-mail. The collection of data from the follow-up sample, if generally complete (which may be unlikely),
allows for meaningful comparisons between respondents and nonrespondents on actual survey topic variables. It is important to note, however, that if the special efforts to obtain the
follow-up sample in any way influence responses, differences observed may be due to
method effects rather than to true characteristics of nonrespondents.
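A minimal sketch of such a comparison, with invented scores on an assumed 5-point job satisfaction scale:

```python
# Compare original respondents with the resurveyed nonrespondent
# follow-up sample on an actual survey variable (illustrative data).
from scipy import stats

respondent_scores = [4.1, 3.8, 4.5, 3.9, 4.2, 4.0, 3.7]  # original returns
followup_scores = [3.2, 3.5, 3.0, 3.6, 3.3]              # resurveyed nonrespondents

t, p = stats.ttest_ind(respondent_scores, followup_scores, equal_var=False)
print(f"respondents vs. follow-up sample: t = {t:.2f}, p = {p:.3f}")
```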
Technique 3: Wave analysis. By noting in the data set whether each survey was returned
before the deadline, after an initial reminder note, after the deadline, and so on, responses
from pre-deadline surveys can be compared with those from late responders on actual survey variables (e.g., compare job satisfaction levels). If late respondents differ from early respondents, this most likely suggests that some level of bias exists. However, given that late respondents are not “pure” nonrespondents in that they obviously did complete the survey, their similarity to early respondents does not conclusively indicate an absence of bias.
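One way the wave comparison might look in code (data and variable names are invented for the example):

```python
# Flag each return as early (pre-deadline) or late (post-reminder), then
# compare the waves on a focal survey variable (illustrative data).
import pandas as pd
from scipy import stats

returns = pd.DataFrame({
    "wave": ["early"] * 6 + ["late"] * 4,
    "job_sat": [4.2, 3.9, 4.0, 4.4, 3.8, 4.1, 3.4, 3.6, 3.2, 3.5],
})
early = returns.loc[returns["wave"] == "early", "job_sat"]
late = returns.loc[returns["wave"] == "late", "job_sat"]
t, p = stats.ttest_ind(early, late, equal_var=False)
print(f"early vs. late respondents: t = {t:.2f}, p = {p:.3f}")
```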
Technique 4: Passive nonresponse analysis. Rogelberg et al. (2003) found that the vast
majority of nonresponse can be classified as being passive in nature. Passive nonresponse
does not appear to be planned. Instead, passive nonrespondents may not have actually
received the survey (unbeknownst to the researcher), may have forgotten about it or mislaid it, may have been ill, or may simply not have gotten around to it given other commitments (e.g., Peiperl &
Baruch, 1997). Considering that the individuals who make up the passive nonresponse group
seem, as a general rule, willing to participate in filling out the survey, it is not surprising that
they generally do not differ from respondents with regard to job satisfaction or related variables. Thus, for most survey instances, bias is not created by passive nonrespondents. The
only plausible instance in which passive nonresponse could create bias is where the survey
assesses constructs that are indeed related to the reasons that passive nonrespondents fail to
return the survey. For example, passive nonresponse could present a problem when the
survey topic in question is related to workload, busyness, or excess demands.
To conduct a passive nonresponse analysis, one should include questions on the survey
that tap into factors related to passive nonresponse. Then during the analysis of the data,
one can examine the extent to which these factors are related to standing on the focal topic.
For example, if busyness is related to standing on the survey topic, response bias would be
quite probable. If a relationship is detected, it may be possible to compensate by using the
variable as a control variable. If a quantitative assessment of the relationship between these
passive nonresponse factors and the content of the survey cannot be undertaken, an analysis based on topically relevant theory and literature should still occur. Does theory suggest that these factors relate to the constructs in question? Have a team of subject matter
experts reflect and make judgments on this question based on their practical experience
with particular variables and populations. Armstrong and Overton (1977) reported that a
consensus among three judges resulted in only 4% of items incorrectly classified with
respect to the direction of nonresponse bias.
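A minimal sketch of the quantitative version of this check, with a hypothetical busyness item and invented data:

```python
# Correlate a self-reported busyness item (a passive-nonresponse factor)
# with standing on the focal survey topic; a substantial correlation flags
# the results as bias susceptible. Data are illustrative.
import numpy as np
from scipy import stats

busyness = np.array([2, 4, 3, 5, 1, 4, 2, 5, 3, 4])
workload_strain = np.array([2.1, 4.3, 2.8, 4.9, 1.5, 3.8, 2.4, 4.6, 3.1, 4.0])

r, p = stats.pearsonr(busyness, workload_strain)
print(f"busyness x focal topic: r = {r:.2f}, p = {p:.3f}")
```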
Technique 5: Interest-level analysis. Researchers have repeatedly found that interest level in the survey topic is related to an individual's likelihood of completing the survey
(e.g., Groves, Presser, & Dipko, 2004). As a result, if interest level is related to attitudinal
standing on the topics making up the survey, the survey results are susceptible to bias. To
conduct an interest-level analysis, researchers should include a few items that examine
respondents’ interest toward the particular topic. Subsequently, assuming an absence of
range restriction, the researcher can investigate bias by assessing the relationship between
responses to these items and responses to the actual survey topic(s). If significant relationships appear, bias most likely exists in the respondent sample. It may be possible to compensate for this bias by using interest level as a control variable–again assuming that there
is enough variation in interest among the respondents to support this analysis.
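One hedged way to implement this analysis is a simple regression of the focal scale on the interest items; the simulated data below are purely illustrative:

```python
# Regress the focal survey scale on measured topic interest. A significant
# slope flags bias susceptibility; interest can then be retained as a
# control variable in substantive models. Simulated data for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
interest = rng.normal(3.5, 1.0, 200)                 # interest-in-topic items
focal = 0.4 * interest + rng.normal(0.0, 1.0, 200)   # focal survey scale

model = sm.OLS(focal, sm.add_constant(interest)).fit()
print(f"interest slope = {model.params[1]:.2f}, p = {model.pvalues[1]:.4f}")
```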
Technique 6: Active nonresponse analysis. Active nonrespondents, in contrast to passive
nonrespondents, are those who overtly choose not to respond to a survey effort. The nonresponse is volitional and a priori (i.e., it occurs when initially confronted with a survey
solicitation). Active nonrespondents tend to differ from respondents on a number of dimensions typically relevant to the organizational survey researcher (e.g., job satisfaction, intentions to quit). Fortunately, the ability of this active nonresponse group to introduce bias is
limited by their diminutive size. If the number of active nonrespondents to a survey effort
is large, however, the potential for bias increases substantively. Survey researchers should
try to estimate the magnitude of anticipated active nonresponse to a proposed survey effort
(see Rogelberg et al., 2003). This task can be accomplished at the early stages of the
research by conducting interviews or focus groups with a random group of population
members aimed at estimating the extent of anticipated and volitional nonresponse (e.g., ask
employees their intentions to the specific survey situation in question). If the results show
that the active nonrespondent group comprises a low proportion of the population, fewer
concerns for bias arise. If the proportion of active nonrespondents is greater than 15% of the
group of individuals included in the interviews or focus groups (this has been the average
rate in other studies), generalizability may be compromised.
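A short sketch of quantifying anticipated active nonresponse from such interviews, with a confidence interval around the estimated proportion (counts are hypothetical):

```python
# Estimate the anticipated active-nonresponse proportion from pre-survey
# interviews and compare it against the roughly 15% benchmark noted above.
from statsmodels.stats.proportion import proportion_confint

n_interviewed = 60
n_active_refusals = 7   # interviewees who said they would deliberately not respond

estimate = n_active_refusals / n_interviewed
low, high = proportion_confint(n_active_refusals, n_interviewed, method="wilson")
print(f"anticipated active nonresponse: {estimate:.1%} (95% CI {low:.1%} to {high:.1%})")
```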
Technique 7: Worst-case resistance. Given the data collected from study respondents in
an actual study, one can empirically answer the question of what proportion of nonrespondents would have to exhibit the opposite pattern of responding to adversely influence sample results. By adding simulated data to an existing data set, one can explore how resistant
a data set is to worst-case responses from nonrespondents. Taking a simple mean comparison as an example, let’s say that women scored more highly than men on a particular scale.
Using simulated data, one could add a distribution of low-valued responses to the group of
women and/or high-valued responses to the group of men to estimate the proportion of nonrespondents that would be needed to force researchers to draw the opposite conclusion
from the data (i.e., men score higher than women). Using a realistic distribution of simulated data (i.e., not just a collection of responses at the most extreme scale value), it may
become evident from this exercise that an absurdly large proportion of the nonrespondent
pool would have to exhibit the opposite pattern of results to make substantive change to the
conclusions drawn from the actual data.
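A simplified simulation in this spirit; the respondent distributions and the "realistic low" worst-case mean are assumptions made for illustration:

```python
# Starting from an observed mean difference (women > men), add increasingly
# many simulated worst-case (low-scoring) nonrespondent women until the
# observed conclusion reverses. All distributions are illustrative.
import numpy as np

rng = np.random.default_rng(7)
women = rng.normal(4.2, 0.6, 120)   # observed respondent data
men = rng.normal(3.8, 0.6, 100)

n_nonrespondents = 150              # known from the sampling frame
for k in range(0, n_nonrespondents + 1, 10):
    sim_women = rng.normal(3.0, 0.6, k)          # realistic low distribution
    augmented = np.concatenate([women, sim_women])
    if augmented.mean() <= men.mean():
        print(f"conclusion reverses once {k}/{n_nonrespondents} "
              f"({k / n_nonrespondents:.0%}) of nonrespondents fit the worst case")
        break
else:
    print("conclusion survives even the full worst-case nonrespondent pool")
```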
Technique 8: Benchmarking analysis. Many constructs of organizational interest–such as
organizational commitment–are measured using established scales that have national
norms. These norms provide mean scale levels for a variety of subgroups. Other evidence,
such as validity studies pertaining to these scales, may also contain other distributional
information such as standard deviation, skewness, and so forth. Subsequent studies that use
these scales should replicate those distributional characteristics, within the limits of statistical error. Therefore, assuming one has scale norms available that are appropriate to the
characteristics of a sample, demonstrating similarity between one’s sample results and a set
of norms is an additional argument for an absence of nonresponse bias. This capability also
demonstrates an important benefit to using established scales for organizational research.
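A minimal sketch of a benchmarking check; the norm values here are placeholders, not actual published norms:

```python
# Compare the sample's distribution on an established scale against
# published normative values (placeholder numbers for illustration).
import numpy as np
from scipy import stats

sample = np.array([3.9, 4.1, 3.6, 4.4, 3.8, 4.0, 4.2, 3.7, 4.3, 3.9])
norm_mean = 4.0   # assumed normative mean for a comparable subgroup

t, p = stats.ttest_1samp(sample, norm_mean)
print(f"sample vs. norm mean: t = {t:.2f}, p = {p:.3f}")
print(f"sample SD = {sample.std(ddof=1):.2f} (compare against the normative SD)")
```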
Technique 9: Demonstrate generalizability. By definition, nonresponse bias is a phenomenon that is peculiar to a given sample under particular study conditions. Triangulating
with a sample collected using a different method or varying the conditions under which the
study is conducted should also have effects on the composition of the nonrespondent
group. For these reasons, replicating a set of findings across multiple data samples is
another compelling method of demonstrating an absence of substantive nonresponse bias.
In many of the N-BIAS techniques described above, one is amassing evidence for the
absence of nonresponse bias by making comparisons and tests that preferably lead to nonsignificant statistical results. But the probability of failing to find a statistically significant
effect when one is, in fact, present–typically referred to as beta in discussions of the null
hypothesis significance test–is the complement of statistical power (beta = 1 − power) and closely connected to
sample size. Thus, a convincing body of evidence against nonresponse bias depends on
having substantial sample sizes to conduct the N-BIAS analyses.
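One way to act on this point, which the article does not prescribe but which fits its logic, is to frame such comparisons as equivalence tests rather than relying on nonsignificance alone. A sketch using the two one-sided tests (TOST) procedure, with simulated data and an assumed equivalence bound:

```python
# Test whether respondents and a nonrespondent proxy group (here, a late
# wave) are equivalent within a pre-declared negligible difference of
# +/- 0.25 scale points. Data and the bound are illustrative assumptions.
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(3)
respondents = rng.normal(4.00, 0.8, 300)
late_wave = rng.normal(4.05, 0.8, 120)

p, _, _ = ttost_ind(respondents, late_wave, -0.25, 0.25)
print(f"TOST p = {p:.3f} (p < .05 supports equivalence within +/- 0.25)")
```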
Future Research Agenda
The N-BIAS approach was constructed on the basis of our current understandings of
nonresponse to organizational surveys. It should most appropriately be labeled N-BIAS 1.0
in that future advancements should lead to amendments and improvements. As our knowledge
base increases, so should our ability to more accurately assess whether our data are representative and without substantive bias. In this spirit, we propose what we see are the most pressing research needs below. They appear in two parts: (a) study of respondents/nonrespondents
and (b) study of response rates/facilitation.
Study of Respondents/Nonrespondents
Organizational context. An organizational survey differs from political polling/consumer
survey types in several subtle ways (Rogelberg, 2006). For example, potential respondents
to organizational surveys usually have a relatively close connection to the survey sponsor
(e.g., their organization’s management). In addition, with organizational surveys, respondents have beliefs about the track record of inaction or action with respect to past organizational surveys. Potential respondents may also perceive greater risk associated with
completing an organizational survey as opposed to a polling or consumer survey (e.g., possible repercussions). Future work needs to examine whether research in one survey domain
can effectively generalize to the other domain—can we safely extend all we know about
public polling and marketing surveys to employee surveys? Are there particular dimensions
of the survey context that limit generalizability? On a related note, future work should
attempt to more systematically examine the impact of situational variables (as close as possible to the date of survey administration) on survey response. For example, we need
research on how current workload, unexpected assignments, seasonal differences in business activity, or related factors enhance or reduce individuals’ likelihood to respond to organizational surveys.
The active/passive model. Research should continue to examine active nonrespondents
and passive nonrespondents. Research to date has divided active and passive nonrespondents
on the basis of a behavioral-intentions measure that must be administered to a complete
respondent pool. If we knew more about these different groups and how to predict the composition of their membership, the necessary processes of quantifying the extent and causes
of active nonresponse could be simplified. Are there typical rates of active and passive nonresponse to different types of survey efforts? How stable is active nonresponse? Can an
active nonrespondent be “won” over? Can active or passive nonresponse groups be further
broken down into subtypes? For example, evidence suggests that for some passive nonrespondents, dispositional factors (e.g., low conscientiousness) explain their failure to
respond, whereas for others, situational factors are more influential (e.g., lost the survey;
competing priorities). Likewise, it may be possible to divide active nonrespondents on the
basis of the nature and origins of their concerns–for example, an objection to being oversurveyed versus substantive concerns with the organizational environment.
Predictors and covariates. A range of additional predictors of nonresponse/response are
worthy of examination, such as specific personality traits (e.g., prosocial personality, reciprocation wariness), affective states, and organizational cynicism. As one of our N-BIAS
techniques, we suggested measurement of interest level in the survey topic as a bias assessment
and control variable, but no standard measure of interest in a survey topic exists. A normed
measure of survey interest would help researchers understand the impact of this variable on
their survey results. More generally speaking, additional work on organizational survey
response norms should be conducted. Research shows that prevailing social norms affect
participation in marketing and political surveys (e.g., Bosnjak, Tuten, & Wittman, 2005;
Groves, Singer, & Corning, 2000). It is possible that organizations also have prevailing
social norms related to culture and climate that influence whether responding to surveys is
seen as a desirable citizenship behavior.
Multi-level issues. When examined at an aggregate level, nonresponse data may shed
light on organizational variables of concern. For example, if certain teams, units, or departments have lower response rates than other departments, this result may be indicative of
some important underlying differences in job satisfaction or organizational commitment. At
present, relatively little of the nonresponse literature has examined any level of analysis
between the individual and the organization. It is likely, however, that team, unit, and
department-level issues (e.g., team cohesion) affect response rate and that, in turn, nonresponse bias may affect results of multi-level analyses conducted at these differing levels
of aggregation. Future research should use multiple levels of analysis when examining
response rates. Researchers should try to understand and predict unit-level response rates
by identifying variables that relate to unit-level response.
At a larger scale, one multi-level issue of great concern to multinational organizations is
the effect of culture on survey response behavior. Although examples of cross-cultural
research on survey response do exist, many of these studies are conducted by creating models at the individual level of analysis or by comparing group means. The use of multi-level
techniques to understand the effect of culture as a context surrounding organizational survey response might provide useful insights to those study administrators charged with
obtaining culturally diverse samples.
A positive organizational behavior approach. In most research on nonresponse, the lion’s
share of attention is focused on nonrespondents, why they fail to respond, and how best to
convert them to respondents. The motivations and barriers associated with nonresponse are
the primary focus of the research. In the spirit of positive organizational behavior, we need
more research on what motivates respondents to actually respond to organizational surveys
and how they expect their results to accomplish positive change in the organization. This is
particularly noteworthy in that motivated respondents often have better item completion rates
and respond to the survey with greater effort and care (Rogelberg, Fisher, Maynard, Hakel, &
Horvath, 2001). Relatedly, it would be worthwhile to examine how past survey experiences
and behavior predict future survey response behavior. Organizations provide a survey context that is unlike marketing and polling applications because employees are generally
expected to respond to a series of surveys over a period of years. Are there serial respondents
and nonrespondents? Almost no research has examined “within-person” response rate issues,
but the organizational context is the perfect setting for conducting this variety of research.
Survey length and oversurveying. Usable techniques for reducing scale length and survey size have existed for several years now (see Stanton, Sinar, Balzer, & Smith, 2002), yet
the typical scale length on organizational surveys does not appear to have diminished as a
result. Long organizational surveys that are frequently administered inevitably leave employees feeling tired of taking surveys, and this in turn reduces response rates. We need additional research on methods to prevent oversurveying both by
reducing the average survey length and by reducing the frequency with which organizational surveys are administered. In addition, we need evaluation research to assess the
extent to which these techniques are useful for enhancing response rates. On a related note,
in the presence of low response rates within their organizations, survey administrators often
undertake extensive efforts to facilitate response (e.g., multiple reminders). Paradoxically,
these efforts may lead to a stronger feeling of being oversurveyed on the part of employees
and thus may inadvertently diminish future response rates. Whether this is true, and the extent of any such effect, is another worthwhile empirical question.
Innovative Internet techniques. A common observation in the popular press is that the
“current generation” is more accustomed to playing video games than to reading books. Regardless
of the extent to which this is true, and whether or not it has a future impact on business
communication, it is certainly the case that the Internet provides substantial new opportunities for researchers to collect their data using methods quite different from the traditional
paper-and-pencil Likert-type scale (see Rogelberg, Church, Waclawski, & Stanton, 2002;
Stanton & Rogelberg, 2001). Efforts such as The StudyResponse Project have shown that
the Internet can be used advantageously to help researchers understand how nonresponse
may be affecting their research. The Internet also provides opportunities for simulations,
role-playing, participant observation, and other data collection methods that may help to
counteract survey fatigue and research challenges related to nonresponse.
In addition to the research agenda described above, organizational science as a field
would benefit from three larger initiatives. First, our principal journals should develop editorial statements addressing nonresponse and expectations on how the authors should
address it in their research. Second, the field would benefit from a sincere effort to address
the oversurveying phenomenon in organizations. This is a particularly relevant concern
with the growing popularity of Internet surveys and the ease with which the Internet and
other related technologies allow for virtually instantaneous contact with employees and the
public in general. Preventing potential research participants from being oversolicited with
research requests begins with educating our students and clients about the merits of alternative research methodologies (e.g., focus groups) and on the various circumstances when
other research techniques are preferable to surveys. Together with this education, we advocate the development of organizational survey registries designed to monitor survey use,
carefully plan survey efforts, and prevent poor surveys from being conducted.
Finally, we advocate the creation of a survey respondent “bill of rights” endorsed by our
major professional societies. The bill of rights would suggest that researchers need to treat
potential survey respondents as a finite and depletable resource: a resource that needs to be
respected, protected, and nurtured. The bill of rights would detail the social responsibilities
researchers have to research participants. This list should go beyond what is required by
ethics committees and should address initiatives that promote a sense of goodwill toward
our research. For example, participants should be entitled to feedback concerning their participation in our research (e.g., summary of results). In addition, participants should not be
subject to unreasonably extensive reminders or invitations to participate in research projects.
A constant barrage of e-mail solicitations or reminders to participate in research seems the
surest way to anger potential research participants and discourage their participation. In
addition to its ethical purpose, participant debriefing should be informative and educational. Finally, particularly in applied settings, research participants should be informed
regarding what actions will come from the research data collected or, at the very least, be given an explanation of inaction.
The Feature Topic Issue
We served as action editors on a set of submissions to Organizational Research Methods
that the journal solicited in 2005 on the topic of nonresponse to organizational surveys. The
solicitation invited “conceptual (i.e., new theory) and literature review papers,” as well as
“papers offering guidelines and best practices that are based on solid empirical work published previously (these would be useful for people who are planning on conducting a survey).” We received more than 25 submissions for this feature topic, and all papers were
subjected to the journal’s rigorous double-blind review process. With the vital assistance of
editorial assistant Barbara Stephens; the journal’s editor, Herman Aguinis; and a cadre of
dedicated reviewers, this review process led to the acceptance of five very strong papers
that have begun to address the research agenda described above.
Rose, Sidle, and Griffith (in press) have started to address our call for positive approaches
to motivating respondents with their innovative field experiments on the size and presentation
of token monetary rewards. The use of token rewards presented at the time of solicitation
has served as a staple of public opinion polling for years and has generally demonstrated
substantial positive effects on response rate. The technique has not seen extensive use in the
context of organizations, however, perhaps because organizational researchers see their
research participants as a captive audience that already receives material rewards from the
organization. Nonetheless, Rose, Sidle, and Griffith have shown that token monetary incentives have a beneficial effect within the organizational context, and they have laid the
groundwork for a very productive line of future research on this topic.
Lyness and Kropf (in press) have responded to our call for multilevel examinations of
nonresponse with their examination of cross-national differences in response rates. The
substantial differences in response rates from organizational units in different national contexts should provide a wake-up call to any researchers who hope that a one-size-fits-all
approach to improving response rates can work when deployed internationally. Their study also
provides intriguing evidence concerning the impact of interest in survey topic on response
rates. Given the increasingly global nature of the business climate for many companies, an
awareness of how national and cross-cultural influences may affect organizational survey
response is particularly apropos.
Allen, Stanley, Williams, and Ross (in press) have provided sobering evidence about the
impact of nonresponse on the results of studies that involve measuring workgroup diversity.
This article also responds to our call for examining nonresponse from a multi-level point of
view. The article reports a set of simulations that demonstrate the powerful effects that nonresponse may have on the outcomes of group research. These researchers’ conclusions also
amply demonstrate our warning concerning the danger of naming a particular response rate as
“acceptable.” In their study, response rates in excess of 60%–a level of response of which many
researchers would be envious–still led to substantial distortions in observed correlations.
Thompson and Surface (in press) examined how employees feel about taking surveys
online, whether dissatisfaction with Web-based survey media discourages response, and the
representativeness of attitudinal data produced by workers who opt to complete an online
climate survey. This study addresses our call for the in-depth analysis of Internet-based data
collection techniques used in organizational contexts. It is noteworthy how they went about
examining their research questions: They demonstrated how creative and multifaceted
(both qualitative and quantitative) approaches can advance our understanding of issues pertaining to nonresponse in ways that single-modality studies cannot.
Werner, Praxedes, and Kim (in press) provided an interesting snapshot of the organizational science literature and its propensity to conduct the type of N-BIAS analyses we advocate above. They reviewed publications from nine journals over 5 years. Only 31% of the
relevant survey studies in their sample reported some type of nonresponse analysis. They also
found that articles in low-quality journals, articles in journals with long review times, and articles with higher response rates were less likely to report nonresponse analyses. Although they acknowledge that nonresponse analyses may be performed behind the scenes for
reviewers without a report of the results appearing in the published article, this trend should
be of great concern to organizational science researchers. It provides compelling fuel to our
claim above that our principal journals should develop editorial statements addressing nonresponse and expectations on how the authors should address it in their research.
To conclude, we offer our sincere thanks to all of the authors who submitted proposals
and manuscripts to the feature issue, as well as to all of the scholars who volunteered to
serve as reviewers for these articles. As a group, these submissions represented a large measure
of timely and high-quality scholarship, and we regret that space and scheduling limitations
precluded inclusion of more of this fine work in the feature topic. To the authors whose articles do appear in this issue we offer our congratulations for persevering through a challenging review process, and to the editorial staff of the journal we owe a debt of gratitude
for your assistance and support in bringing this feature topic to fruition.
References
Allen, N. J., Stanley, D. J., Williams, H., & Ross, S. J. (in press). Assessing the impact of nonresponse on work group diversity effects. Organizational Research Methods.
Armstrong, J. S., & Overton, T. S. (1977). Estimating nonresponse bias in mail surveys. Journal of Marketing
Research, 14 (Special Issue: Recent Developments in Survey Research), 396-402.
Baruch, Y. (1999). Response rate in academic studies–A comparative analysis. Human Relations, 52(4), 421-438.
Bosnjak, M., Tuten, T. L., & Wittman, W. W. (2005). Unit (non)response in Web-based access panel surveys:
An extended planned-behavior approach. Psychology and Marketing, 22, 489-505.
Cook, C., Heath, F., & Thompson, R. L. (2000). A meta-analysis of response rates in Web- or Internet-based
surveys. Educational and Psychological Measurement, 60, 821-836.
Crossley, H. M., & Fink, R. (1951). Response and non-response in a probability sample. International Journal
of Opinion & Attitude Research, 5, 1-19.
Deming, W. E. (1953). On a probability mechanism to attain an economic balance between the resultant error
of response and the bias of nonresponse. Journal of the American Statistical Association, 48, 743-772.
Dillman, D. A. (2000). Mail and Internet surveys: The tailored design method. New York: John Wiley.
Fowler, F. J. (2002). Challenges for standardizing interviewing. Contemporary Psychology: APA Review of
Books, 47(4), 405-407.
Fox, R. J., Crask, M. R., & Kim, J. (1988). Mail survey response rate: A meta-analysis of selected techniques
for inducing response. Public Opinion Quarterly, 52, 467-491.
Groves, R., Presser, S., & Dipko, S. (2004). The role of topic interest in survey participation decisions. Public
Opinion Quarterly, 68, 2-31.
Groves, R. M., Singer, E., & Corning, A. (2000). Leverage-saliency theory of survey participation: Description
and an illustration. Public Opinion Quarterly, 64, 299-308.
Heberlein, T. A., & Baumgartner, R. (1978). Factors affecting response rates to mailed questionnaires: A quantitative analysis of the published literature. American Sociological Review, 43, 447-462.
James, J., & Bolstein, R. (1990). The effect of monetary incentives and follow-up mailings on the response rate
and response quality in mail surveys. Public Opinion Quarterly, 54, 346-361.
Kraut, A. I. (1996). Introduction: An overview of organizational surveys. In A. I. Kraut (Ed.), Organizational surveys: Tools for assessment and change (pp. 1-14). San Francisco: Jossey-Bass.
Luong, A., & Rogelberg, S. G. (1998). How to increase your survey response rate. The Industrial-Organizational Psychologist, 36, 61-65.
Lyness, K. S., & Kropf, M. B. (in press). A multilevel examination of cross-national differences in mail survey
response rates. Organizational Research Methods.
Peiperl, M. A., & Baruch, Y. (1997). Models of careers: Back to square zero. Organizational Dynamics, 35(4), 7-22.
Rogelberg, S. G. (2006). Understanding nonresponse and facilitating response to organizational surveys. In
A. I. Kraut (Ed.), Getting action from organizational surveys: New concepts, methods, and applications
(pp. 312-325). San Francisco: Jossey-Bass.
Rogelberg, S. G., Church, A. H., Waclawski, J., & Stanton, J. M. (2002). Organizational survey research:
Overview, the Internet/intranet and present practices of concern. In S. G. Rogelberg (Ed.), Handbook of
research methods in industrial and organizational psychology (pp. 141-160). Oxford, UK: Basil Blackwell.
Rogelberg, S. G., Conway, J. M., Sederburg, M. E., Spitzmuller, C., Aziz, S., & Knight, W. E. (2003). Profiling active and passive nonrespondents to an organizational survey. Journal of Applied Psychology, 88(6), 1104-1114.
Rogelberg, S. G., Fisher, G. G., Maynard, D., Hakel, M. D., & Horvath, M. (2001). Attitudes toward surveys:
Development of a measure and its relationship to respondent behavior. Organizational Research Methods,
4, 3-25.
Rogelberg, S. G., & Luong, A. (1998). Nonresponse to mailed surveys: A review and guide. Current Directions
in Psychological Science, 7, 60-65.
Rogelberg, S. G., Luong, A., Sederburg, M. E., & Cristol, D. S. (2000). Employee attitude surveys: Examining
the attitudes of noncompliant employees. Journal of Applied Psychology, 85(2), 284-293.
Rose, D. S., Sidle, S., & Griffith. (in press). A penny for your thoughts: Monetary incentives improve response
rates for company-sponsored employee surveys. Organizational Research Methods.
Rubin, D. (1987). Multiple imputation for nonresponse in surveys. New York: John Wiley.
Stanton, F. (1939). Notes on the validity of mail questionnaire returns. Journal of Applied Psychology, 23, 170-187.
Stanton, J. M., & Rogelberg, S. G. (2001). Using Internet/intranet Web pages to collect organizational research
data. Organizational Research Methods, 4, 199-216.
Stanton, J. M., Sinar, E. F., Balzer, W. K., & Smith, P. C. (2002). Issues and strategies for reducing the length
of self-report scales. Personnel Psychology, 55(1), 167-193.
Suchman, E. A., & McCandless, B. (1940). Who answers questionnaires? Journal of Applied Psychology, 24,
758-769.
Thompson, L. F., & Surface, E. (in press). Employee surveys administered online: Attitudes toward the medium, nonresponse, and data representativeness. Organizational Research Methods.
Tomaskovic-Devey, D., Leiter, J., & Thompson, S. (1994). Organizational survey response. Administrative
Science Quarterly, 39, 439-457.
Weiner, S. P., & Dalessio, A. T. (2006). Oversurveying: Causes, consequences, and cures. In A. I. Kraut (Ed.),
Getting action from organizational surveys: New concepts, methods, and applications (pp. 294-311). San
Francisco: Jossey-Bass.
Werner, S., Praxedes, M., & Kim, H. G. (in press). The reporting of nonresponse analyses in survey research.
Organizational Research Methods.
Yammarino, F. J., Skinner, S. J., & Childers, T. L. (1991). Understanding mail survey response behavior: A
meta-analysis. Public Opinion Quarterly, 55, 613-639.
Yu, J., & Cooper, H. (1983). A quantitative review of research design effects on response rates to questionnaires.
Journal of Marketing Research, 20, 36-44.
Steven G. Rogelberg is a professor of organizational science and psychology at the University of North
Carolina at Charlotte. He also serves as director of Organizational Science and director of
Industrial/Organizational Psychology. He has more than 50 publications addressing topics like organizational
research methods, team effectiveness, employee well-being, work meetings, and organizational development.
Jeffrey M. Stanton, PhD, is an associate professor at Syracuse University. He studies organizational behavior
and technology, in particular how behavior affects security and privacy in organizations. He is author of “The
Visible Employee: Using Workplace Monitoring and Surveillance to Protect Information Assets–Without
Compromising Employee Privacy or Trust” (2006, Information Today).