Running Head: WORKPLACE SAFETY RESEARCH
Workplace Safety Research
Lina Namu
BUS. 642
December 15, 2014
This research project studies workplace safety. The topic merits attention because of the ongoing health and safety issues in many organizations. Measures related to workplace safety programs have changed considerably; safety has become an integral part of the organization and a major determinant in a company's quality assurance systems. There has been pressure from different quarters to establish the link between productivity and performance in organizations and the standard of health and safety those organizations maintain. These issues carry costs, both direct and indirect, for the organization, and they motivate this research on safety in the workplace.
Management dilemma
The management dilemma in this case is how an organization can link production and quality with the quality of safety in the organization. Safety in the workplace has become a common problem across all types of organizations. Management has not fully addressed the issue, nor has it established procedures and policies that meet the threshold of improving health and relating it directly to quality and production.
Research question
The research question for this study is: Does workplace safety affect production and quality in the organization? The research will seek to answer this question thoroughly and to ensure that the data gathered are relevant and convincing.
Hypothesis
The hypothesis for this study is that poor workplace safety leads to low-quality production. Low product quality, in turn, reduces the organization's revenue and income, which lowers employees' motivation to improve quality and makes it harder to restore the original revenues and profits.
Literature review
The National Research Council (U.S.) and Institute of Medicine (U.S.) (2009) conducted research to evaluate occupational health and safety. The main aim was to identify approaches and lines of research directed toward enhancing health and safety surveillance in organizations. The researchers examined and evaluated different health and safety programs, and they offer recommendations that can help organizations understand whether they meet the threshold for safety programs.
LaTourrette, Mendeloff, and the RAND Corporation (2008) looked at ways organizations can increase safety in the workplace. Using the RAND Center as the focal point for the research, the authors make clear that there is a relationship between improving workers' understanding of health and safety and outcomes in any organization, and they show that there are monetary benefits to combining the two. Together, these two articles show the need for further analysis of the connection among health, safety, and benefits to the organization.
Ethical concerns
Ethical issues in research are the norms that help researchers coordinate actions and activities in order to establish trust and discipline among those involved. This research will observe several ethical concerns that govern how the business research is conducted, and the researcher will adhere to these ethical norms throughout the study.
Some of the ethical concerns include fabrication of information. During the research and data collection, some information from respondents may be fabricated and may not align with the true state of safety and health in the workplace. Secondly, the information might include falsification that fails to meet the standards required for representing the research; falsified information can also misrepresent the research data. The research is nevertheless obligated to promote the truth and avoid any error resulting from such unethical activity encountered during the study. Responses from employees, management, and other stakeholders may not present the whole truth, or may be falsified with the aim of hiding shortfalls against certain required standards.
Another concern is cooperation and coordination of the research process. The researcher will deal with people from different disciplines and institutions in order to access their data. Ethical standards are essential for promoting the values of collaborative work (Goodwin, 2011), as well as trust and accountability among members. In addition, collaborative work requires mutual respect among team members and fairness in dealing with each team or institution.
Copyright, authorship, and data-sharing policies are among the regulations and standards the research will have to contend with, especially regarding intellectual property interests (Landrum, 2014). The researcher will also have to comply with federal ethical policies on research misconduct, conflicts of interest, human rights, and social responsibility, and must act responsibly in accordance with health and safety regulations and laws. Ethical lapses in research can significantly harm the rights of the researchers as well as those of the participants and the public as a whole.
Research design
Preliminary thoughts about the research design set out the do's and don'ts for the framework of the study. The design will include choosing the topic as selected; preparing the proposal, including the background and significance of the topic; and selecting the research design methods, including the major experimental challenges (Bernard, 2011). The design will also include a discussion of expected results in the context of the hypothesis, which will help incorporate all relevant data and ensure that the research is up to standard.
Not all methods are suited to this research project. In sampling the data, this research will use qualitative data analysis, bringing together collections of statistics to help in understanding the meanings, beliefs, and experiences revealed by the research.
References
Bernard, H. R. (2011). Research methods in anthropology: Qualitative and quantitative approaches (5th ed.). Lanham, MD: AltaMira Press.
Goodwin, K. (2011). Designing for the digital age: How to create human-centered products and services. Chichester, England: John Wiley & Sons.
Landrum, R. E. (2014). Research methods for business: Tools and applications. San Diego, CA: Bridgepoint Education, Inc.
LaTourrette, T., Mendeloff, J. M., & Rand Corporation. (2008). Mandatory workplace safety and health programs: Implementation, effectiveness, and benefit-cost trade-offs. Santa Monica, CA: RAND Corporation.
National Research Council (U.S.), & Institute of Medicine (U.S.). (2009). Evaluating occupational health and safety research programs: Framework and next steps. Washington, DC: National Academies Press.
8
Effective Survey and Questionnaire Research
Learning Objectives
After reading this chapter, you should be able to:
• Discuss the challenges faced when selecting individuals from the population to be studied, including the difference between probability and nonprobability sampling approaches.
• Identify key research methods and compare and contrast different methodological approaches.
• List the major approaches to survey research design.
• Present the key issues regarding the analysis of survey data, including issues related to the types of
possible errors, challenges in handling data, and consideration of different data analytic techniques.
• Identify the tips provided about survey construction in order to create or edit surveys, attending to
the details necessitated by open-ended and closed-ended survey items.
• Describe the major types of scales used in business research.
• Explain the role of pilot testing prior to launching into a business research project.
• Discuss the benefits and limitations to using surveys.
Pre-Test
1. A researcher who stops people on the street to survey them is using
a. probability sampling.
b. nonprobability sampling.
c. simple random sampling.
d. stratified sampling.
2. Which two survey methods can have drawbacks in terms of coverage?
a. in-person interviews and mailed surveys
b. online surveys and mailed surveys
c. in-person interviews and telephone interviews
d. online surveys and telephone interviews
3. A survey that collects data at one point in time is known as a
a. time series survey design.
b. longitudinal study.
c. mixed mode survey approach.
d. cross-sectional survey.
4. Asking survey questions that are not specific enough is considered a form of
measurement error.
a. true
b. false
5. Surveys are useful tools for understanding respondents’ skills.
a. true
b. false
6. A survey provides a statement and then asks respondents whether they completely
disagree, generally disagree, generally agree, or completely agree with that statement.
This is an example of a(n)
a. Likert scale.
b. dichotomous scale.
c. Likert-type scale.
d. semantic differential scale.
7. In pilot testing, researchers should check that their survey includes an appropriate
vocabulary level, along with an adequate font size.
a. true
b. false
8. One advantage of surveys, compared to other research methods, is the greater control
they offer over variables of interest.
a. true
b. false
Answers
1. b. nonprobability sampling. The correct answer can be found in Section 8.1.
2. d. online surveys and telephone interviews. The correct answer can be found in Section 8.2.
3. d. cross-sectional survey. The correct answer can be found in Section 8.3.
4. a. true. The correct answer can be found in Section 8.4.
5. b. false. The correct answer can be found in Section 8.5.
6. c. Likert-type scale. The correct answer can be found in Section 8.6.
7. a. true. The correct answer can be found in Section 8.7.
8. b. false. The correct answer can be found in Section 8.8.
Introduction
A utility infielder in baseball is a player who can play many different positions, which makes
this player extremely versatile and valuable. In many ways, survey research methods may
play the same role in the tool belt of the business researcher. The survey is a wonderful tool in
many business situations, and it is versatile and adaptable. In fact, either conducting survey
research or helping colleagues interpret survey research may be one of your most frequent
forays into business research methods.
Knowledge about survey research design and its many variations on a theme can be particularly helpful to business researchers for different reasons. First, from a general research
perspective, the survey method is a popular approach to uncovering the attitudes, beliefs,
opinions, or perceptions of multiple stakeholders in the business arena. Surveys can be particularly efficient in gathering large amounts of information from large numbers of individuals
in order to draw general conclusions or formulate a snapshot of current conditions. However,
even if one is not a producer of business research, businesspeople need to be savvy consumers of business research. As new research findings are made available in journals, magazines,
in-service workshops, and national conferences, those in business need to understand the
fundamental concepts of research in order to make learned decisions about the viability of
research—that is, to what extent findings are to be believed and acted upon. Given the ubiquitous nature of survey research throughout the business world, a deeper understanding of
survey methodology, sampling, survey scales, item construction, and online tools is essential.
8.1 Sampling the Population
Why sample? If the goal is to understand how a population thinks, acts, feels, and so on, then
why not study the entire population? First, researchers often do not have comprehensive lists
of members of a population. Say, for example, that a researcher wanted to survey all customers who use iPhones. Is there a comprehensive list available? The telephone service providers
might be a good start, but names and addresses are unlikely to be part of the public record.
Additionally, some iPhone® users have switched to Android™, and some Android users have
switched to iPhones.
The ultimate goal of sampling is to study a representative portion of the population. Thus, by studying the sample carefully and methodically, generalizations can
be drawn about the variables or behaviors of interest in the greater population. Two major
types of sampling approaches exist—probability sampling and nonprobability sampling.
In probability sampling, potential survey participants have an equal chance of being selected;
that is, their selection is based on a probability (such as 1 out of 100). In nonprobability sampling, there is no equal chance guaranteed. The research surveyor at the mall stopping shoppers is using nonprobability sampling because not all potential shoppers even visit the mall.
There are other methodological issues as well. Because of the mathematics and probability
behind sampling theory, very good samples can be drawn from populations with relatively
small margins of error. It is possible to estimate the percentage of high school graduates, within
+/–3 percentage points, in a small county of 25,000 adults by collecting 1,024 completes (completed surveys), and to measure the same among the U.S. population of more than 300 million
by collecting 43 more surveys (Dillman, Smyth, and Christian, 2009). Sampling is efficient.
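Those figures follow from the standard sample-size formula for a proportion with a finite-population correction. A minimal sketch, assuming a 95% confidence level (z = 1.96) and a 50/50 proportion (the most conservative case), neither of which is stated explicitly in the text:

```python
import math

def sample_size(population, margin=0.03, z=1.96, p=0.5):
    """Completes needed to estimate a proportion within +/- margin,
    with a finite-population correction."""
    numerator = population * p * (1 - p)
    denominator = (population - 1) * (margin / z) ** 2 + p * (1 - p)
    return math.ceil(numerator / denominator)

print(sample_size(25_000))       # the small county
print(sample_size(300_000_000))  # the U.S. population
```

With these defaults the county requires about 1,024 completes and the national sample only a few dozen more, which is exactly the point: required sample size grows very slowly with population size.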
Lastly, surveying an entire population might lead to a greater number of nonrespondents.
Survey researchers become concerned about nonrespondents because if bias drives a person’s choice not to complete the survey, it may weaken the validity of the data (Dillman et
al., 2009). Business researchers are better suited to select a sampling procedure that allows
them to estimate any potential of sampling error in order to obtain a representative sample
while minimizing bias and high nonresponse rates. Probability sampling strives to achieve
each of those goals.
Probability Sampling
The overarching goal of probability sampling is that the sample drawn will be representative
of the population if all members of that population have an equal probability of being selected
for the sample. Often, one hears this stated as a nonzero probability (StatPac, 2009; StatTrek,
2009), meaning that there is a chance for every person to be selected, no matter how slim that
chance might be. There are a variety of different approaches to probability sampling, including simple random sampling, systematic sampling, stratified sampling, cluster sampling, and
multistage sampling.
Simple Random Sampling
The simple random sample is perhaps the purest form of sampling, and also probably one
of the rarest techniques used. Suppose a researcher had a roster of the entire population
available. He or she could assign random numbers to all possible participants and then use a
random number table to select the sample. In this situation, everyone in the survey population has the same probability of being chosen (Edwards & Thomas, 1993).
Systematic Random Sampling
In a systematic random sample, every nth person from a list is selected (Edwards &
Thomas, 1993). Let’s say that a company has 2,000 employees, and the researcher determines that 100 surveys should be completed. Each employee would have an equal chance
of being selected to complete the survey; therefore, the probability of being selected is n/N
(Lohr, 2008), or in this example, 100/2000, or 1 out of every 20 employees. So, every 20th
employee would be selected. After determining a random starting point (let’s say #4, for
example), every 20th employee on the roster is selected, meaning the 4th, 24th, 44th, 64th,
84th, 104th, 124th, and so forth (Chromy, 2006).
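The worked example above can be sketched in code; the helper name is illustrative, and the fixed starting point of 4 is taken from the example rather than chosen at random:

```python
def systematic_sample(roster, n, start):
    """Select every k-th member beginning at `start` (1-based), where k = N // n."""
    k = len(roster) // n           # sampling interval: 2000 // 100 = 20
    return roster[start - 1::k][:n]

employees = list(range(1, 2001))   # employee numbers 1..2000
picked = systematic_sample(employees, n=100, start=4)
print(picked[:5])  # [4, 24, 44, 64, 84]
```

In practice the starting point would be drawn at random from 1 to k, which is what makes the procedure a probability sample.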
Stratified Sampling
Stratified sampling involves an approach where extra precautions are taken to ensure representativeness of the sample. Strata define groups of people who share at least one common characteristic that is relevant to the topic of the study (StatPac, 2009). The term “strata”
is the plural of stratum; a study could have one stratum, or multiple strata. For example, a
researcher might want to ensure that the selected sample is representative based on gender,
so he or she would “stratify” on gender. If the researcher knows that 55% of the population
is female and 45% of the population is male, then random sampling within a gender stratum
is used to extract a sample that precisely matches the gender breakdown of the population.
By using relevant strata, sometimes oversampling is used to decrease sampling error from
relatively small groups; that is, researchers may choose to oversample from groups less likely
to respond (Edwards & Thomas, 1993). In a survey of CEOs where CEO respondents of a
particular manufacturing sector were desired, perhaps there is a very small percentage of
female CEOs. If that were the case, a researcher might oversample female CEOs, in hopes
that the resulting sample would be representative of the population. If the percentages in the
population match the sample strata selected (as in the gender example), this is proportionate
stratification; if oversampling is used, this practice is considered disproportionate stratification (Henry, 1990).
Cluster Sampling
When thinking about random sampling, researchers tend to think about how each person
should have an equal chance of being selected or not selected for a sample. Rather than think
about the individual person level, cluster sampling addresses how groups of people may
or may not be selected for inclusion in a study. That is, individual clusters could be neighborhoods or businesses or schools or counties, and so on. Publishing companies often have
representatives who work in a particular region of the country, and bookstores within a certain region could be considered a cluster. To determine if a new inventory control program is
effective at controlling theft and ensuring accurate pricing at purchase, some clusters could
be selected to implement the new inventory control program, and other (yet similar) clusters might serve as control conditions for comparison. In this case, individuals are not being
tested, but clusters of stores within a particular region are being tested. Once the bookstores
are assigned to a group or cluster, then the entire cluster is selected or not selected at random (Edwards & Thomas, 1993). The cluster sample technique is particularly useful when it
is impossible or impractical to compile an exhaustive list of members comprising the target
population (Babbie, 1973; Henry, 1990).
Multistage Sampling
Multistage sampling describes a process that follows after cluster sampling has been implemented. For instance, if you are collecting data from high school seniors, is every high school
senior within the selected school/cluster surveyed, or is a systematic random sample drawn?
In essence, the multistage sampling approach is two-stage sampling, involving (a) the selection
of clusters as a primary selection, and (b) sampling members from the selected clusters to produce the final sample (Chromy, 2006; Henry, 1990).
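The two stages can be sketched as follows; the regions, store counts, and cluster sizes are all hypothetical:

```python
import random

# Hypothetical clusters: stores grouped by region.
clusters = {f"region_{r}": [f"store_{r}_{s}" for s in range(50)] for r in range(10)}

# Stage 1: primary selection of whole clusters.
chosen_regions = random.sample(list(clusters), k=3)

# Stage 2: simple random sampling of members within each selected cluster.
final_sample = [store
                for region in chosen_regions
                for store in random.sample(clusters[region], k=10)]

print(len(final_sample))  # 30 stores: 10 from each of 3 selected clusters
```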
Nonprobability Sampling
Nonprobability sampling means just that: It is unknown what the probability is of each possible participant in the population to be selected for the study. With nonprobability sampling,
sampling error cannot be estimated (StatPac, 2009). Two key advantages to nonprobability
sampling, however, are cost and convenience (StatTrek, 2009). The main approaches utilizing
nonprobability sampling are convenience sampling, quota sampling, snowball sampling, and
a volunteer sample.
Convenience Sampling
Convenience samples are just that—convenient. If a researcher were to email everyone on
their contact list and ask survey questions, this would comprise a convenience sample. This
technique is often used in exploratory research where a quick and inexpensive method is
used to gather data (StatPac, 2009). Researchers have long relied on convenience samples to
collect data. For example, if all customers who walk into a restaurant were asked to complete
a “customer service card,” the individuals who visit the restaurant would be considered a
convenience sample.
Quota Sampling
Quota sampling as a nonprobability sampling technique is the parallel equivalent of stratified sampling from the probability sampling world. In stratified sampling, one identifies
characteristics of interest, and then the researcher strives to ensure that the individuals
selected represent the population of interest in a proportional manner. In quota sampling,
the researcher also desires the strata of interest but recruits individuals (nonrandomly) to
participate in a study (StatPac, 2009). Thus, quotas are filled with respect to the key characteristics needed for survey participants from the population.
Snowball Sampling
When using the snowball sample technique, members of the target population of interest
are asked to recruit other members of the same population to participate in the study. This
procedure is often used when there is no roster of members in the population, and those
members may be relatively inaccessible, such as illegal drug users, pedophiles, or members
of a cult (Fife-Schaw, 2000). Snowball sampling relies on referrals, and may be a relatively
low-cost sampling procedure (StatPac, 2009), but there is a high probability that the individuals who participate may not be representative of the larger population.
Volunteer Sample
Volunteering is commonly used for soliciting survey participation, but often the results are
quite limited due to the possible motivational differences between volunteers and nonvolunteers. When a popular website posts a survey and invites volunteers to participate, the
explanatory and predictive power of the data gathered may be suspect (StatTrek, 2009). It is
difficult to make confident generalizations from a sample to a population when nonprobability samples are employed, and even less confidence exists if a volunteer sample is utilized.
With one piece of the survey puzzle in place (sampling), the next section presents the major
survey research approaches or strategies that are commonly used.
8.2 Survey Research Methods
Successful research is a multistep process. Even after choosing a research design (either
experimental or quasi-experimental) and determining the sampling plan, there is still much
to be done. The next step is to determine the specific methods by which data are collected.
This section provides an overview about the choices that survey researchers must answer
concerning this important piece.
Interviews
In some ways, in-person interviews remain the gold standard in survey research. Interviews
have fewer limitations about the types and length of survey items to be asked, and trained interviewers can use visual aids during the interview, so that the interviewee can see, feel, or taste
a product (Creative Research Systems, 2009; Frey & Oishi, 1995). Interviews are thought to be
one of the best ways to obtain detailed information from survey participants. With an in-person
interview, the interviewer and the participant can build rapport through conversation and eye
contact, which might allow for deeper questions to be asked about the topic of interest.
The drawbacks of interviewing include high costs and the reluctance of individuals to take
the time to complete an interview (Creative Research Systems, 2009; Frey & Oishi, 1995). In
addition to one-on-one interviews that may be prearranged, there are also intercept interviews, such as those seen at a mall where an interviewer intercepts shoppers and asks them
questions. The level of intimacy that can be achieved with an in-person interview could also
be a drawback for some people. For instance, introverts might be more willing to express
their feelings within a group setting rather than during a one-on-one interview. There are also
group interviews, or focus groups, where a group of people are interviewed at the same time.
The reluctance to participate in in-person interviews led to the growth of using the telephone
as a modality of conducting survey research (Tuckel & O’Neill, 2002). The use of telephone
methodology has increased over time, but it faces a number of challenges today. For example,
coverage has always been a concern of telephone research. That is, the greater percentage
of homes with a telephone, the better the survey coverage, and the better the possibility of
drawing a representative sample from the population of interest. Notice how telephone coverage in the United States has changed over time (Kempf & Remington, 2007):
• In 1920, 65% of households did not have a telephone.
• In 1970, 10% of households did not have a telephone.
• In 1986, 7–8% of households did not have a telephone.
• In 2003, less than 5% of households did not have a telephone.
• In 2009, 4.7% of households did not have a telephone (Belinfante, 2010).
As you can see, coverage is quite good with regards to households with a phone, but there are
multiple challenges for researchers today, such as working within the context of Do Not Call
lists. Additionally, the growth of cell phone usage is changing the face of telephone survey
research. There are current laws that limit the solicitations made via cellular calls because
some recipients of those calls must pay for each call received. Answering machines, caller ID,
privacy managers, and call blocking services all add to the increasing challenges of conducting
survey research by telephone. However, researchers continue to develop new strategies for
improving the efficiency of telephone surveys, such as by using computer-assisted telephone
interviewing (CATI) systems, random digit dialing (RDD), and interactive voice response systems (“Press 1 if you are . . . “).
Mailed Surveys
Mailed surveys remain a viable way of collecting data, and there are both advantages and
disadvantages to using this mode. The advantages of mail surveys include (a) relatively low
cost per survey respondent, as mailed surveys can be processed with a relatively small staff;
(b) no time pressure for respondents; (c) the use of visual stimuli, including different scaling
techniques and visual cues for survey completions; (d) the removal of the potential effect
(bias) of the interviewer; (e) participants have greater privacy; and (f) if a good sample frame
is available with a mailing list, the benefits of random sampling techniques can be utilized
(Dillman et al., 2009).
The potential disadvantages of mail surveys include (a) potentially low response rates; (b) limited capabilities for complex questions and the inability for an interviewer to clarify questions
being answered; (c) when mail is delivered to a household, there is no guarantee that the person
for whom the survey is intended is the person completing the survey; and (d) the turnaround
time for receiving mailed survey responses can be long (de Leeuw & Hox, 2008).
Online Surveys
As a comparison to paper-and-pencil surveys, online surveys offer a number of advantages,
including (a) easy and inexpensive distribution to large numbers of individuals via email,
(b) the participant is guided through the survey by essentially filling out a form (i.e., skip
patterns are hidden from view), and (c) digital resources (e.g., video clips, sound, animation) can be incorporated into the survey design, and questions can be “required” to be
answered as well as verified instantly (e.g., when asked for a birth year, if something other
than a four digit number is entered, the participant can be instantly prompted to use the
correct format, and prevented from proceeding until making the correction) (Beidernikl &
Kerschbaumer, 2007).
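The birth-year check described above might look like the following in form-handling code; the function name, year bounds, and messages are illustrative, since real survey platforms implement this validation internally:

```python
import re

def validate_birth_year(response):
    """Return (ok, message); reject anything other than a plausible
    four-digit birth year so the respondent can be prompted to correct it."""
    if not re.fullmatch(r"\d{4}", response.strip()):
        return False, "Please enter your birth year as a four-digit number."
    year = int(response)
    if not 1900 <= year <= 2014:
        return False, "Please enter a valid birth year."
    return True, ""

print(validate_birth_year("1985"))  # accepted
print(validate_birth_year("85"))    # rejected with a correction prompt
```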
There are a number of survey tools available to assist in the collection of data: Two of the
more popular choices are SurveyMonkey (www.surveymonkey.com) and Qualtrics (www
.qualtrics.com), although there are many others, including QuestionPro, Zoomerang, KeySurvey, SurveyGizmo, and SurveyMethods. Many of these websites will allow free account
creation and use on a limited basis to design a survey and collect data with that survey. After
creating the survey, the business researcher can use the software to create a custom URL that
can be emailed to potential participants or posted on a website. One of the advantages to
online survey software is that the results can usually be downloaded directly into an Excel™
file for later analysis (or other types of data files, such as SPSS® files). Also, some of the sites
can assist with rudimentary data analysis, as well as creating graphs and charts, without
exporting the data.
Two key drawbacks of online surveys are issues of coverage and nonresponse (de Leeuw &
Hox, 2008). The issue of coverage, or who has Internet access and who does not, is sometimes
referred to as the digital divide (Suarez-Balcazar, Balcazar, & Taylor-Ritzler, 2009). Some specific examples of the drawbacks of coverage include: (a) individuals from low-income and
working-class communities are less likely to have access to the Internet; (b) low-income,
working-class, culturally diverse individuals are more likely to have only one computer, which
limits the potential for completing Internet-based surveys; (c) limited access often translates
into limited familiarity with online applications; and (d) there may be cultural barriers that
make Internet research more difficult to successfully accomplish (Suarez-Balcazar et al.,
2009). With regard to those who do not answer the survey (called nonrespondents), this
can be a tricky situation to address. It is difficult to know why a person does not respond to
a survey request; given the nonresponse, it is doubly difficult to ask individuals why they did
not respond. If the nonresponse rate is relatively low, then nonresponse is likely not to be a
major issue; no large-scale survey garners 100% participation. However, if the nonresponse
rate is 80%, then researchers need to wonder about and explore why only 20% of the sample
is responding, and if there is something systematically happening that would inhibit possible
participants from responding.
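The nonresponse rate itself is simple arithmetic, as this short sketch shows (the sample figures are hypothetical):

```python
def nonresponse_rate(invited, completed):
    """Share of the invited sample that did not respond."""
    return 1 - completed / invited

# With 500 invitations and 100 completes, nonresponse is 80%,
# so only 20% of the sample is heard from -- a level at which
# systematic differences between responders and nonresponders matter.
rate = nonresponse_rate(invited=500, completed=100)
print(f"{rate:.0%}")  # -> 80%
```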
In addition to coverage, there is also the challenge of representativeness, and an Internet
survey approach may not achieve the level of representativeness desired (Beidernikl & Kerschbaumer, 2007; de Leeuw & Hox, 2008). In fact, one can think about whether those replying to an Internet survey are representative of the entire population, representative of the
Internet population, or even representative of a certain targeted population (Beidernikl &
Kerschbaumer, 2007). Add in the complexity of culture, and one can see that well-designed
Internet surveys can take a significant amount of work. Consider this example—even though
it is not a business example, it is an excellent example of the challenges of representativeness:
For instance, in the Chicago Public Schools, students speak over 100 different languages and dialects. Social scientists planning studies in these types of
settings must consider how they are going to communicate with the participants’ parents. Although children of first generation immigrants may be able
to speak, read, and participate in Internet-based surveys in English, information such as consent forms and research protocols that are sent to the parents
may need to be translated into their native language and administered using
paper-and-pencil format. (Suarez-Balcazar et al., 2009, p. 99)
If they do not use their tools carefully, online survey researchers are capable of invading privacy, and care should be taken to minimize that threat, for example by providing for anonymous responding via the Internet (Cho & LaRose, 1999). That is, if survey researchers promise anonymity and confidentiality, then those promises must be upheld.
Comparison of Methodologies
With all the options of survey administration, the natural question arises: Which approach
is best? The answer is that it depends. However, there have been studies conducted that
compare the different methodologies. It has been found that, on average, web-based
surveys have an 11% lower response rate as compared to mailed and telephone surveys (de
Leeuw & Hox, 2008). In an experiment that directly compared regular mail and email surveys, researchers found comparable response rates—57.5% for regular mail, and 58.0% for
email (Schaefer & Dillman, 1998). In comparing telephone surveys and web-based surveys,
a two-wave web-based approach provided more reliable data estimates than telephone
surveys, and at a lower cost—each telephone survey cost $22.75 to complete, whereas the
cost of each web-panel survey was $6.50 (Braunsberger, Wybenga, & Gates, 2007).
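The cost figures above can be turned around to show what a fixed budget buys under each modality; a quick sketch using the unit costs reported by Braunsberger, Wybenga, and Gates (2007), with a hypothetical budget:

```python
def completes_for_budget(budget, cost_per_complete):
    """How many completed surveys a fixed budget buys at a given unit cost."""
    return int(budget // cost_per_complete)

# Per-complete costs from Braunsberger et al. (2007), as cited in the text;
# the $5,000 budget is invented for illustration.
budget = 5000.00
print(completes_for_budget(budget, 22.75))  # telephone surveys -> 219
print(completes_for_budget(budget, 6.50))   # web-panel surveys -> 769
```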
What does the future hold for preferred survey research modality? In addition to comparison
studies, a growing trend is to utilize a mixed mode approach, where multiple modalities are
accessed to achieve the research goals (Nicolle & Lou, 2008). Thus, participants may receive
email reminders to participate in a telephone survey. The mixed mode approach can also utilize the collection of qualitative and quantitative data. Qualitative data, such as the responses
to open-ended questions on a survey, can provide particularly rich and useful information
and are often the most helpful when we know the least about a topic.
If the sampling plan and survey modality are in place, another decision to be made is the
overall design of the survey research. In some regard, these concepts overlap with topics from
Chapter 7. However, a brief review of how these design decisions affect survey research is
warranted here.
8.3 Survey Research Design
Although different researchers may use slightly different terminology, the major categories
of survey research designs, which include cross-sectional surveys (conducted at one point in
time) and longitudinal surveys (conducted over time), are presented in this section.
Cross-Sectional Survey
In a cross-sectional survey design, data collection occurs at a single point in time with the
population of interest (Fife-Schaw, 2000; Visser, Krosnick, & Lavrakas, 2000). One way to think
about a cross-sectional survey is that it is a snapshot in time (Fink & Kosecoff, 1985). Regarding the benefits of cross-sectional surveys, they are relatively inexpensive and relatively easy
to do (Fife-Schaw, 2000; Fink & Kosecoff, 1985). However, if the landscape changes rapidly,
and the amount of change is important to a researcher’s study, then using a cross-sectional
design will not capture this change over time (Fife-Schaw, 2000; Fink & Kosecoff, 1985).
Longitudinal Survey
A longitudinal survey is conducted across time. The key advantage of longitudinal designs
is that they allow for the study of age-related development. However, this can be confounded
with events during a lifetime that might influence your variables (Fife-Schaw, 2000). Longitudinal studies also face unique challenges, such as keeping track of respondents over time
and deciding how to motivate them to continue to respond in the future (Dillman et al., 2009).
Attrition (dropping out of the study over time) is a drawback, and participants repeatedly
tested can be susceptible to the demand characteristics of the research. Having participated
multiple times in the past, the participants know what is expected and probably understand
the variables and general hypotheses being tested (Fife-Schaw, 2000).
In a cohort study, new samples of individuals are followed over time (Jackson & Antonucci,
1994). In a panel study, the same people are studied across time, spanning at least two points
in time (Fink & Kosecoff, 1985; Jackson & Antonucci, 1994; Visser et al., 2000). This type of
study can be particularly useful in understanding why particular changes occur longitudinally because researchers ask the same participants to respond over and over (there is also a
baseline comparison measure from when they first entered the study).
8.4 Analysis of Survey Data
In most respects, analyzing survey data is the same as analyzing any other type of data; analysis choices are based on the hypotheses, scales of measurement, tools available for data analysis, and so on. Before mentioning specific approaches for data analysis, let’s review the types
of errors that are encountered in survey research. Remember that “errors” in this context are
not mistakes but are the possible outcomes of the study that researchers cannot account for,
or the changes or values of the dependent variable that are not due to the independent variables being manipulated, controlled, or arranged.
Types of Errors
In classic measurement theory, the total amount of error is assumed to be the sum of measurement error + sampling error (Dutka & Frankel, 1993). Those who study survey research
design further categorize the types of threats and errors that can occur with this type of
research. The following is a four-cornerstone model of surveying and errors that is useful
here for greater understanding (Dillman et al., 2009).
Coverage Error
A coverage error in survey research arises when the chosen methodology cannot reach part of the population. For example, if an Internet approach is used, only about 70% of households have Internet access, so coverage error exists
(Dillman et al., 2009). The coverage error is much smaller with telephone surveys, but the proportion of individuals with landlines is decreasing, while cell phone subscribers are increasing
(Kempf & Remington, 2007). There are laws that govern researchers’ calling respondents’ cell
phones, because such calls can incur additional costs for the recipient. Survey researchers need to
be cognizant of coverage error concerns when making methodological choices.
Sampling Error
A sampling error occurs when not all of the potential participants from a population are
represented in a sample, which is often due to the sampling method utilized by the researcher
(Dutka & Frankel, 1993; Futrell, 1994). Another related sampling issue is volunteerism, or
self-selection. When a study relies on volunteers, there is always a concern that volunteers
may behave differently than nonvolunteers; if this is the case, it weakens the generalizability
of the survey results.
In fact, volunteers often differ from nonvolunteers in the following ways: (a) volunteers are
more educated than nonvolunteers; (b) volunteers are from a higher social class than nonvolunteers; (c) volunteers are more intelligent than nonvolunteers; (d) volunteers are more
approval motivated than nonvolunteers; and (e) volunteers are more sociable than nonvolunteers (Rosenthal & Rosnow, 1975). However, if the only way you can conduct your research is
by using volunteers, then that is what you do. But it is important to remember these caveats
when drawing conclusions from research that depends exclusively on volunteer participants.
If you ask only current iPad users for their opinion about the latest iPad version released, these eager volunteers may respond differently than nonvolunteers/nonusers.
Measurement Error
Measurement error can occur for a number of reasons, but these tend to fall into the categories of measurement variation (the lack of a reliable instrument) and measurement bias
(asking the wrong questions or using the results inappropriately) (Dutka & Frankel, 1993).
As in any complex enterprise, the potential for mistakes can be high.
Some common measurement errors that can occur in survey research are: (1) failing to assess
the reliability of the survey; (2) ignoring the subjectivity of participant responses in survey
research; (3) asking nonspecific survey questions; (4) failing to ask enough questions to capture the behavior, opinion, or attitude of interest; (5) utilizing incorrect or incomplete data
analysis methods; and (6) drawing generalizations that are supported by neither the data nor
the data analysis strategy selected (Futrell, 1994). Essentially, measurement errors address
issues of (a) did we measure what we thought we measured, and (b) did we interpret the
results appropriately?
Nonresponse Error
Nonresponse error is of particular concern in survey research (Dillman et al., 2009). As
a general rule, if the response rate is 25% or less (or a nonresponse rate of 75% or more),
the survey researcher should ask the question: Are those responding to my survey different
from those not responding to my survey (Dillman et al., 2009)? There are many approaches
for dealing with high nonresponse rates, and some of those methods involve weighting the
responses that are received, as well as following up with a subset of nonresponders and asking why they did not respond (Dale, 2006). The goal here is to determine that there was no
systematic bias in responses or nonresponses to the initial survey request. If there is no bias
(or no systematic reason driving nonresponse), then the nonresponse rate is less of a concern
to the survey researcher.
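One simplified version of the weighting idea mentioned above is post-stratification: weight each respondent group so the weighted sample matches known population shares. The sketch below is illustrative only, not the specific procedure in the sources cited, and the group names and counts are hypothetical:

```python
def poststratification_weights(population_shares, sample_counts):
    """Weights that make weighted group shares match known population shares.

    Each group's weight is its population share divided by its sample share;
    overrepresented groups are down-weighted, underrepresented ones up-weighted.
    """
    total = sum(sample_counts.values())
    return {
        group: population_shares[group] / (sample_counts[group] / total)
        for group in sample_counts
    }

# Suppose the population splits 50/50 under/over age 40, but responders skew young.
weights = poststratification_weights(
    population_shares={"under_40": 0.5, "over_40": 0.5},
    sample_counts={"under_40": 150, "over_40": 50},
)
print(weights)  # under_40 responses down-weighted, over_40 up-weighted
```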
Data Handling Challenges
The details and complexity of data handling issues within survey research are beyond the
scope of this chapter, but two issues are worth mentioning. After collecting data, but prior to
analysis, there must be some data cleaning (sometimes called data editing). Although every
survey researcher must do this, there are not commonly accepted standards for data cleaning
(Leahey, Entwisle, & Einaudi, 2003). Sometimes this process involves the elimination of outliers, but other times data decisions are more complex.
Let’s say a survey included items with a Likert-type agreement scale, with 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, and 5 = strongly agree. When the respondent
answered the question “I am comfortable with the undergraduate major I have selected,” the
coded answer was 55. What do you do? Do you assume they meant a 5 (strongly agree), and
change their response? Is it possible to go back and confirm what the participant meant, or
was the data collected anonymously? You could guess that a 55 meant a 5, but what about a
“23” entry—should this be a 2 (disagree) or 3 (neutral)? Here’s one more: In an online survey,
where respondents directly enter their age, a participant enters the value “1.9”—should that
be recoded as 19 years old, or should that data be deleted?
These data cleaning issues are related to how survey researchers handle missing data, which
can be done using a number of complex approaches (Dale, 2006; Graham, Taylor, Olchowski, &
Cumsille, 2006; Rudas, 2005). It is best to make decisions about data cleaning before you have
the data, such as (a) discarding data that has a high probability of being incorrect, or (b) seeking
out the original, correct data when the data presented are suspect. If you cannot confirm what
a participant meant by his or her response, delete it. As you become savvier in performing data
cleaning and missing data analyses, you can alter this conservative approach. Furthermore, if
you collect your survey data anonymously, you have no method of contacting individuals to
clarify their intended response. If you must err, err on the side of caution.
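The conservative rule described above (discard what you cannot confirm) can be sketched in a few lines of Python; the function name and range defaults are illustrative, not a standard routine:

```python
def clean_likert(value, low=1, high=5):
    """Apply a conservative cleaning rule to a 1-5 Likert response.

    Values outside the valid range, or not whole numbers, are discarded
    (returned as None) rather than guessed at -- the cautious default when
    the respondent cannot be re-contacted to clarify.
    """
    try:
        v = int(value)
    except (TypeError, ValueError):
        return None
    return v if low <= v <= high else None

print(clean_likert(4))      # -> 4 (valid, kept)
print(clean_likert(55))     # -> None (out of range: discard, don't guess "5")
print(clean_likert(23))     # -> None (ambiguous 2 vs. 3: discard)
print(clean_likert("1.9"))  # -> None (not a whole number in range: discard)
```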
Data Analytic Approaches
As alluded to earlier, the possibilities for analyzing survey data are vast and depend on many
of the same characteristics as other data analysis situations, such as the scale of measurement, the amount of data available, and the hypotheses to be tested. It would not be possible
to summarize all of the options here, as entire books are available about the subject (Fink,
1995). See Chapter 12 for a brief overview of the various data analysis options available to
researchers. Data analytic strategies can become more or less complicated, however. If your
goal is to communicate effectively with the public, you might not choose to present the results
of a repeated measures ANOVA, but you might present a table of means, or a bar graph that
clearly and succinctly communicates the story you want to tell. If you compare two nominal
scale variables, such as gender differences regarding how respondents answered a categorical survey item (“Are you married?”), then a chi-square analysis would be appropriate. Essentially, you will need the knowledge that you learn from a statistics course in order to analyze
your survey data.
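For instance, the chi-square statistic for a two-way table of counts can be computed directly; the counts below are hypothetical, and in practice a statistics package would also report the p value:

```python
def chi_square(table):
    """Pearson chi-square statistic for a two-way contingency table.

    `table` is a list of rows of observed counts, e.g., gender (rows) by
    answers to "Are you married?" (columns). Each cell contributes
    (observed - expected)^2 / expected, where expected = row total *
    column total / grand total.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = men, women; columns = married yes, no.
observed = [[30, 20], [25, 25]]
print(round(chi_square(observed), 3))
```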
8.5 Tips for Effective Survey Item Construction
Before researchers begin generating their pool of survey questions, it is beneficial to think
first about what they are trying to measure—that broad category of human response they are
trying to capture. Consider these categories: (a) attitudes, beliefs, intentions, goals, aspirations; (b) knowledge, or perceptions of knowledge; (c) cognitions; (d) emotions; (e) behaviors and practices; (f) skills, or perceptions of skills; and (g) demographics (eSurveyPro,
2009; Rattray & Jones, 2007).
Making decisions about which broad category (or categories) to inquire about has implications for the entire survey. For example, if you ask too many difficult knowledge questions,
respondents may quit your survey early, not providing you with the data you need. Actual
skills may be difficult to capture in a survey format, but you may be able to ask respondents
about their perceptions of their own skills. Demographics can be tricky as well. Ask about
too many demographics, and participants may feel a sense of intrusion, and the more questions asked, the more identifiable a participant is, even if the data are collected anonymously.
Ask about too few demographics, and you may not be able to provide tentative answers to
your hypotheses. As you have the opportunity to practice your survey skills over time, you
should become more comfortable in being able to assess these broad areas.
The following feature provides some helpful advice for constructing survey items.
Tips for Survey Item Construction
1. Avoid double-barreled items. That is, each question should contain just one thought. A
tipoff to this occurring is sometimes the use of the word “and” in a survey item.
Example to avoid: I like iPhones and Androids.
2. Avoid using double negatives.
Example to avoid: Should the supervisor not schedule an annual evaluation the same week
as the quarterly reports are due? (Answered from Strongly disagree to Strongly agree).
3. Try to avoid using implicit negatives—that is, using words like control, restrict, forbid,
ban, outlaw, restrain, or oppose.
Examples to avoid: Anonymous Internet use should be banned. All grey market sales should
be outlawed.
4. Consider offering a “no opinion” or “don’t know” option.
5. To measure intensity, consider omitting the middle alternative (for example, dropping the
neutral option from: strongly disagree, disagree, neutral, agree, and strongly agree).
6. Make sure that each item is meaningful to the respondents completing the survey. That
is, are the respondents competent enough to provide meaningful responses?
Example to avoid: Windows 8 provides the best infrastructure for coding web-based programs.
7. Use simple language (standard English as appropriate) and avoid unfamiliar or difficult
words. Depending on the sample, aim for an 8th-grade reading level.
Example to avoid: How ingenuous are you when the marketing manager asks if you have
understood the material presented during the sales briefing?
8. Avoid biased questions, words, and phrases.
Example to avoid: Using clickers represents state-of-the-art survey technology. To what
extent have clickers enhanced your survey participation?
9. Ensure your own biases are not represented in your survey items, such as leading
questions.
Example to avoid: Do you think gas-guzzling SUVs are healthy for the environment?
10. Do not get more personal than you need to be to adequately address your hypotheses. Focus
on “need to know” items and not “nice to know” items (helps control for survey length).
11. Try to be as concrete as possible; items should be clear and free from ambiguity. Avoid
using acronyms or abbreviations that are not widely understood.
Example to avoid: The ROI from NGOs is not relevant because of the nonprofit status of
501c3s.
12. Start the survey with clear instructions, and make sure the first few questions are nonthreatening. Typically, present demographic questions at the end of the survey. If you ask
too many demographic items, respondents may be concerned that their responses are
not truly anonymous.
13. If the response scales change within a survey, include brief instructions about this so that
respondents will be more likely to notice the change.
14. If your survey is long, be sure to put the most important questions first—in a long survey,
respondents may become fatigued or bored by the end.
15. Frame questions in such a way as to minimize response set acquiescence. Ask questions
that are reverse scored (that is, strongly disagreeing is a positive outcome).
Example: This sales training seminar is a waste of time. A positive answer would be strongly
disagree.
Sources: Babbie (1973), Cardinal (2002), Converse & Presser (1986), Crawford & Christensen
(1995), Edwards & Thomas (1993), eSurveyPro (2009), Fink & Kosecoff (1985), HR-Survey
(2008), Jackson (1970), McGreevy (2008), and University of Texas at Austin (2007).
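Tip 15 (reverse scoring) amounts to a simple recoding: on a scale running from low to high, a response r becomes (high + low) − r. A minimal sketch:

```python
def reverse_score(response, low=1, high=5):
    """Recode a reverse-worded item so high always means the favorable pole."""
    return (high + low) - response

# "This sales training seminar is a waste of time." -- strongly disagree (1)
# is the positive outcome, so it recodes to 5 on the favorable scale.
print(reverse_score(1))  # -> 5
print(reverse_score(4))  # -> 2
```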
Steps in the Survey Design Process
If you are in a position to create your own survey items, understand that constructing good
survey items is difficult and takes practice. Fortunately, there is much advice available on how
to construct survey items. Although the details of the process may vary, in general these are
the steps in a typical survey design project (DeVellis, 1991):
•	Determine what you want to measure. This task may be more difficult than it originally appears. Think about how you will operationalize the business constructs you are interested in. If you are operating from a theoretical perspective, that theory might provide some insight into clarifying what you want to know. Generate clear, testable hypotheses.
•	Once you have determined what is to be measured, generate a potential item pool, selecting only items that will answer your specific hypotheses. If the project allows, redundancy is good. If you can ask about your topic from a variety of perspectives, do so; your analyses may reveal if one approach is preferable.
•	Decide on the survey design and survey modality, as well as response categories (scales to be used), data analysis choices, and availability of statistical packages. Planning on the front end of the survey process will help to yield useable results when your data collection is complete.
•	Whenever possible, ask experts in the field to review your initial item pool; they may notice subtle nuances in item wording (or identify gaps in the design of the survey).
•	If you are developing a survey about attitudes or opinions, consider including other measures that would help to validate your new items, such as a social desirability scale to help ensure your participants are answering honestly (Marlow & Crowne, 1961).
•	Pilot test (pretest) your sample survey.
•	Evaluate the item performance from the pilot test, including initial reliability estimates if possible, and optimize scale length. Ask enough survey items to measure what you want to measure, but not so many items that potential respondents will be unwilling to complete the survey.
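As an illustration of the reliability step, Cronbach's alpha can be estimated from pilot data. The figures below are hypothetical, and a real analysis would use a statistics package:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).

    `items` is a list of columns, one per survey item, each holding one
    score per respondent.
    """
    k = len(items)
    item_vars = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per person
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Hypothetical pilot: 3 items, 4 respondents (scores on a 1-5 scale).
pilot = [
    [4, 3, 5, 4],
    [4, 2, 5, 3],
    [5, 3, 4, 4],
]
print(round(cronbach_alpha(pilot), 2))  # -> 0.86
```

Values near 0.7 or above are conventionally read as acceptable internal consistency, though that threshold is a rule of thumb rather than a law.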
One of the earliest decisions that should be made in the survey design process, even before
you start to generate survey items with potential response scales, is if response scales will be
included; that is, will survey items be open ended or closed ended?
Open-Ended Versus Closed-Ended Survey Items
Although open-ended survey items are typically more difficult for drawing conclusions, these
items can be effective depending on the type of information that you want to know. What is an
open-ended survey question? It is a survey question that requires a written or verbal response,
such as “How do you feel about parking at your workplace?” or “In your own words, what are
the three most important skills and abilities you need to succeed in our organization?”
Open-ended questions are helpful when answers may be difficult to anticipate, and the
researcher is interested in how the participant sees the world (Fink, 1995). These results can be
labor intensive to interpret, but they can provide quotable material, which may help tell a better story when presenting research at a conference or preparing a manuscript for publication.
The opposite of the open-ended survey item is the closed-ended survey item, which means
that the survey item is presented, and the possible responses are “closed”; that is, the possible options are presented to the participant. So the survey respondent checks a box (male
or female) or perhaps provides an answer along a continuum (strongly disagree to strongly
agree). In general, it is easier to report statistical outcomes from quantitative (closed-ended)
survey data than from qualitative (open-ended) survey data, although it is possible to treat
open-ended survey data quantitatively (for instance, in content analysis).
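As a toy illustration of treating open-ended data quantitatively, responses can be tallied against a keyword codebook. Real content analysis relies on trained human coders and intercoder reliability checks; the responses and codebook here are invented:

```python
from collections import Counter

def code_responses(responses, codebook):
    """Tally open-ended answers against a simple keyword codebook.

    Each response is counted once per category whose keywords it mentions.
    """
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for category, keywords in codebook.items():
            if any(kw in lowered for kw in keywords):
                counts[category] += 1
    return counts

# Hypothetical answers to "How do you feel about parking at your workplace?"
answers = [
    "Parking is too expensive and the lot is far away.",
    "I can never find a spot near the building.",
    "Parking costs keep going up.",
]
codebook = {"cost": ["expensive", "cost"], "availability": ["find a spot", "far"]}
print(code_responses(answers, codebook))  # cost: 2, availability: 2
```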
Table 8.1 is helpful for determining when to use a closed-ended or open-ended survey item.
Table 8.1: Decision table on when to use open-ended versus closed-ended questions

Purpose
	If yes, use OPEN: Respondents’ own words are essential (to please respondent, to obtain quotes, to obtain testimony).
	If yes, use CLOSED: You want data that are rated or ranked (on a scale of very poor to very good, for example), and you have a good idea of how to order the ratings in advance.

Respondents’ Characteristics
	If yes, use OPEN: Respondents are capable of providing answers in their own words. Respondents are willing to provide answers in their own words.
	If yes, use CLOSED: You want respondents to answer using a prespecified set of response choices.

Asking the Question
	If yes, use OPEN: You prefer to ask only the open question because the choices are unknown.
	If yes, use CLOSED: You prefer that responses conform to expected response choices.

Analyzing the Results
	If yes, use OPEN: You have the skills to analyze respondents’ comments even though answers may vary considerably. You can handle responses that appear infrequently.
	If yes, use CLOSED: You prefer to count the number of choices and responses.

Reporting the Results
	If yes, use OPEN: You will provide individual or grouped verbal responses.
	If yes, use CLOSED: You will report statistical data.

Source: Fink (1995).
8.6 Scaling Methods
Perhaps one of the most complicated parts of survey research is deciding on the scale by
which to measure a person’s attitudes, opinions, behavior, or knowledge. Researchers rely on
best practices and established research that guides the decision making necessary to select
an appropriate scale. The following is a brief overview of the major types of scales you are
likely to use.
Dichotomous Scale
A dichotomous scale includes two possible options. If the possible options are agree/disagree,
yes/no, true/false, male/female, and so on, then you are using a dichotomous (binary) scale.
Respondents provide nominal scale data. Some examples of dichotomous scales where a yes/no
type of response would be adequate are:
•	I work at a Fortune 500 company.
•	I download music illegally.
•	I have a 401(k).
Some argue that single yes/no questions are insufficient because they are not sensitive to subtle
change over time, they dictate that individuals place themselves into large categories, and that
many phenomena are so complex that a singular yes/no response may fail to capture the complexity (Spector, 1992). As you design surveys, keep in mind that the hypotheses you wish to
test will help to inform you if a dichotomous scale can yield the type of information you seek.
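Dichotomous responses are typically coded 0/1, after which a simple proportion summarizes the sample; a minimal sketch with hypothetical responses:

```python
def proportion_yes(responses):
    """Code yes/no answers as 1/0 (nominal data) and report the share of yes."""
    coded = [1 if r == "yes" else 0 for r in responses]
    return sum(coded) / len(coded)

# "I have a 401(k)." -- hypothetical yes/no responses from four people.
print(proportion_yes(["yes", "no", "yes", "yes"]))  # -> 0.75
```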
Likert-Type Scales
Likert scales, or more properly Likert-type scales, may be the most famous type of scale used
by researchers today. The Likert scale is named after the psychologist from the University of
Michigan, Rensis Likert (pronounced Lick-ert). Likert’s seminal work (1932), now called a Likert scale, called for a five-point survey response scale, measuring from one pole of disagreement to the other pole of agreement. Each of the scale points has a specific
verbal description (Wuensch, 2005b). A declarative statement is made, and then the respondent selects the appropriate answer. The low value is strongly disagree, and the high value is
strongly agree:
1 = strongly disagree
2 = disagree
3 = neutral (neither agree nor disagree)
4 = agree
5 = strongly agree
There have been many variations and changes suggested, which are loosely based on these
criteria, so you will often see “Likert-type” scale used rather than the very specific Likert
scale. For example, it has been argued that Likert-type variations might be better suited because they carry weaker emotional connotations: 4 = completely agree, 3 = generally agree, 2 = generally disagree, and 1 = completely disagree; or 4 = completely true, 3 = mostly true, 2 = mostly
untrue, and 1 = completely untrue (Fowler, 1988). Of course, these would not conform to
the true Likert scale but would be categorized as Likert-type scales. There have been many
variations on this theme.
Table 8.2 demonstrates some of these variations. Note the varying types of response anchors
possible with a Likert-type scale approach. As you think about the type of scale you might
employ in your survey research, you should begin to appreciate how versatile a Likert-type
scale can be.
Table 8.2: Variations on the Likert-type approaches to scaling

Level of Importance: 1 – Not at all important; 2 – Low importance; 3 – Slightly important; 4 – Neutral; 5 – Moderately important; 6 – Very important; 7 – Extremely important

Level of Agreement: 1 – Strongly disagree; 2 – Disagree; 3 – Somewhat disagree; 4 – Neither agree nor disagree; 5 – Somewhat agree; 6 – Agree; 7 – Strongly agree

Knowledge of Action: 1 – Never true; 2 – Rarely true; 3 – Sometimes but infrequently true; 4 – Neutral; 5 – Sometimes true; 6 – Usually true; 7 – Always true

Effect on X: 1 – No effect; 2 – Minor effect; 3 – Neutral; 4 – Moderate effect; 5 – Major effect

Frequency: 1 – Never; 2 – Rarely; 3 – Sometimes; 4 – Often; 5 – Always

Amount of Use: 1 – Never use; 2 – Almost never use; 3 – Occasionally/sometimes use; 4 – Use almost every time; 5 – Frequently use

Level of Difficulty: 1 – Very difficult; 2 – Difficult; 3 – Neutral; 4 – Easy; 5 – Very easy

Likelihood: 1 – Extremely unlikely; 2 – Unlikely; 3 – Neutral; 4 – Likely; 5 – Extremely likely

Level of Satisfaction: 1 – Completely dissatisfied; 2 – Mostly dissatisfied; 3 – Somewhat dissatisfied; 4 – Neither satisfied nor dissatisfied; 5 – Somewhat satisfied; 6 – Mostly satisfied; 7 – Completely satisfied

Level of Quality: 1 – Poor; 2 – Fair; 3 – Good; 4 – Very good; 5 – Excellent

Source: Vagias (2006)
Semantic Differential Scale
The semantic differential scale technique, developed by Osgood in the 1950s, is a scale
that is designed to measure affect or emotion (Henerson, Morris, & Fitz-Gibbon, 1987). With
adjectives that are opposites on the polar ends of a continuum, participants are asked to
select where they “feel” they are with respect to the survey topic. For example, if you were
asked, “Thinking about this course, how do you feel about the grading policies being used?”
the semantic differential scale items presented would request that you place a checkmark on
one of the seven lines spanning between the polar opposites.
fair ___ ___ ___ ___ ___ ___ ___ unfair
unreliable ___ ___ ___ ___ ___ ___ ___ reliable
confusing ___ ___ ___ ___ ___ ___ ___ clear
helpful ___ ___ ___ ___ ___ ___ ___ not helpful
good ___ ___ ___ ___ ___ ___ ___ bad
Based on prior research, three types of findings tend to emerge from the use of semantic differential scales: an evaluative factor (good–bad), an intensity/potency factor (strong–weak),
and an activity factor (slow–fast) (Page-Bucci, 2003). The semantic differential scale is good
at capturing feelings and emotions, is relatively simple to construct, and is relatively easy for
participants to answer, but the resulting analyses can be complicated (Page-Bucci, 2003). The
following are examples of more possible pairings (Henerson et al., 1987):
angry–calm
bad–good
biased–objective
boring–interesting
closed–open
cold–warm
confusing–clear
dirty–clean
dull–lively
dull–sharp
irrelevant–relevant
last–first
not brave–brave
old–new
passive–active
purposeless–purposeful
sad–funny
slow–fast
sour–sweet
static–dynamic
superficial–profound
tense–relaxed
ugly–pretty
unfair–fair
unfriendly–friendly
unhappy–happy
unhealthy–healthy
uninformative–informative
useless–useful
weak–strong
worthless–valuable
wrong–right
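To score semantic differential items consistently, researchers typically recode responses so that higher numbers always represent the more positive pole; items such as fair–unfair or helpful–not helpful, where the positive adjective appears on the left, are reverse-coded. A minimal illustrative sketch (the function and its parameters are our own, not from the text):

```python
def score_item(position, positive_on_left, n_points=7):
    """Score one semantic differential checkmark.

    position: 1..n_points, counted from the left pole.
    Returns a score where higher always means more positive,
    reverse-coding items whose positive adjective is on the left.
    """
    if not 1 <= position <= n_points:
        raise ValueError("position out of range")
    return (n_points + 1 - position) if positive_on_left else position

# "fair ... unfair": positive pole on the left, checkmark on line 2
print(score_item(2, positive_on_left=True))   # 6 -> quite positive
# "unreliable ... reliable": positive pole on the right, checkmark on line 6
print(score_item(6, positive_on_left=False))  # 6 -> quite positive
```

After recoding, items can be averaged within the evaluative, potency, and activity factors described above.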
Visual Analog Scales
There are many more types of scales that are used in survey research. Visual analog scales
can be used to obtain a score along a continuum, where a participant places a checkmark to
indicate where their opinion falls along the scale. The following is an example of the visual
analog scale:
No pain at all____________________________________ The worst pain I ever experienced
This would be an example of a subjective continuum scale, where a checkmark is made along
the scale to indicate how positive or negative a respondent’s opinion is about a particular
topic.
Very positive____________________________________ Very negative
With the advent of online survey packages, the visual analog scale has become digital. In the
online survey software package Qualtrics, visual analog scales are presented as “sliders,” and
respondents can click on the pointer and slide it to a location along the continuum that represents their belief. See Figure 8.1 for an example of a series of slider questions.
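Slider responses are usually exported as a numeric position, which can then be rescaled to a common range for analysis. A small illustrative sketch, assuming (hypothetically) that the slider reports values from 0 to 100:

```python
def slider_to_fraction(value, lo=0.0, hi=100.0):
    """Normalize a slider reading to a 0-1 fraction of the continuum."""
    if hi <= lo:
        raise ValueError("invalid slider range")
    return (value - lo) / (hi - lo)

readings = [10, 50, 95]
print([slider_to_fraction(v) for v in readings])  # [0.1, 0.5, 0.95]
```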
Figure 8.1: Example of visual analog survey items using sliders
Case Study: Understanding the Differences Between Polls and
Scientific Surveys
Like a survey, a poll is designed to elicit information from respondents, and in common parlance, the two terms are often used interchangeably. But there are key differences. A poll
must also be carefully designed, but unlike a scientific survey, it uses a series of single-item questions
to ask respondents their opinions on a particular issue. There are many types of polls, but the
one most commonly used is an opinion poll. The goal of an opinion poll is to elicit information
from the public, but then to disseminate the results back to the population at large to keep them
informed. Opinion polls are used for both very serious and sometimes frivolous reasons, such as
determining whether the president of the United States is doing a good job, whether Americans
trust the media, or who is America’s favorite movie star.
George Gallup developed one of the best-known polls in 1935. Gallup, Inc. conducts surveys in
160 countries using nationally representative samples and focuses on four broad categories—
Politics, the Economy, Well-Being, and the World. In the United States, the Gallup poll is conducted over the telephone and targets adults aged 18 or older. Callers are selected using proportionate, stratified random sampling. The typical sample size for each poll is 1,000 national adults.
As an example, every year in the United States, Gallup asks people to rate the honesty and ethical
standards of people in a wide variety of professions. As is the case in the vast majority of polls, a
single question with a simple rating scale is used. The rating scale consists of six choices ranging
from “Very high,” “High,” “Average,” “Low,” “Very low,” to “No opinion.” Gallup adds up the percentage of responses in both the “Very high” and “High” categories to determine which profession had
the highest ratings for honesty and ethical standards.
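Gallup's "Very high" plus "High" aggregation is an example of what market researchers call a top-two-box score. The following sketch computes it from response counts; the numbers are invented for illustration and are not Gallup's data:

```python
def top_two_box(counts):
    """Percentage of responses in the two most favorable categories.

    counts: dict mapping category name -> number of respondents.
    """
    total = sum(counts.values())
    favorable = counts.get("Very high", 0) + counts.get("High", 0)
    return round(100 * favorable / total, 1)

# Illustrative counts for one profession in a poll of 1,000 adults
ratings = {"Very high": 390, "High": 460, "Average": 120,
           "Low": 20, "Very low": 5, "No opinion": 5}
print(top_two_box(ratings))  # 85.0
```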
Below are some scores from the 22 professions rated in 2012:
• Nurses—85%
• Medical doctors—70%
• Police officers—58%
• College teachers—53%
• Clergy—52%
• Bankers—28%
• Business executives—21%
• Lawyers—19%
• Insurance salespeople—15%
• Senators—14%
• Advertising practitioners—11%
• Stockbrokers—11%
• Members of Congress—10%
• Car salespeople—8%
As another example, a recent poll of over 2,300 hiring managers and human resource professionals conducted by CareerBuilder.com found that 37% of companies use social media to gather information about potential employees. One of the key questions in the poll was, "What are hiring managers looking for on social media?" The responses were as follows:
• to see if the candidate presents himself/herself professionally—65%
• to see if the candidate is a good fit for the company culture—51%
• to learn more about the candidate’s qualifications—45%
• to see if the candidate is well rounded—35%
• to look for reasons not to hire the candidate—12%
While polls are not developed with the same purposes in mind as a scientific survey, it is clear
that they provide a great deal of useful and reliable information on a wide variety of topics and
that they can gather data in an effective and efficient manner.
Sources: Honesty/ethics in professions. Gallup. Retrieved from
http://www.gallup.com/poll/1654/honesty-ethics-professions.aspx
Thirty-seven percent of companies use social networks to research potential job candidates,
according to new CareerBuilder survey. CareerBuilder. Retrieved from
http://www.careerbuilder.com/share/aboutus/pressreleasesdetail.aspx?id=pr691&sd=4%2F
18%2F2012&ed=4%2F18%2F2099
8.7 The Role of Pilot Testing
Think of a pilot test or pretest as a dress rehearsal prior to conducting a study. It is wise to
pilot test because in measuring human behavior, elements of an experiment can go wrong if
details are not attended to. For example, in survey research, a pilot test can help to determine
if participants understand the survey questions and if the topic is being covered as expected,
as well as helping to make sure that participants understand the context of the survey question (Collins, 2003). There are typically four goals to achieve when pilot testing a survey. The
survey researchers want to (a) evaluate the draft survey items; (b) optimize the length of the
scale for adequate response rate; (c) detect any weaknesses in the survey; and (d) attempt to
duplicate the conditions under which the survey will be administered. Think of the different
pilot tests that car manufacturers use with crash-test dummies prior to the introduction of a
new car model. The pilot testing procedure exists as a method of identifying weaknesses in
the planned approach so that modifications and tweaks can be made before the survey (or
car design) is finalized.
When designing survey research, researchers should ensure that respondents (a) know the
answers, (b) can recall the answers, (c) understand the questions, and (d) are comfortable
reporting the answers in the survey context (Henry, 1990). For instance, survey items should
appear at a reading level that is appropriate for the age and educational level of the participants being studied. Additionally, when collecting data with a Likert-type scale, the survey items should be declarative sentences and should not be phrased in the form of a question.
By assuring participants that their data are anonymous, a researcher encourages honesty
in the participants by not linking their identity to responses about sensitive topics or illegal
behaviors. Pilot testing allows you to find most problems that may occur in your study before
conducting your study (much like proofreading an important email prior to sending it to the
management team at a company).
The following are just some quick pilot test reminders to consider before conducting survey
research (Litwin, 1995):
• Are there any typographical errors?
• Are there any misspelled words?
• Does the item numbering make sense?
• Is the font size big enough to be easily read (on paper; on the screen)?
• Is the vocabulary appropriate for the respondents?
• Is the survey too long?
• Is the style of the items monotonous?
• Are there easy questions mixed in with the difficult questions?
• Are the skip patterns difficult to follow?
• Does the survey format flow?
• Are the items appropriate for the respondents?
• Are the items sensitive to possible cultural barriers?
• Is the survey in the best language for the respondents?
If survey research may be in your future, this list may be helpful as a checklist when preparing
the survey for pilot testing or preparing the survey for distribution to the intended sample.
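A few of the checklist's mechanical items can even be screened automatically before human pilot testers ever see the survey. The sketch below checks just two rules from this section (Likert-type items should be declarative, and the survey should not be too long) using invented item text; it is no substitute for an actual pilot test:

```python
def pilot_check(items, max_items=40):
    """Flag mechanical problems a pilot test should catch.

    Checks only two rules: Likert-type items should be declarative
    statements (not questions), and the survey should not be too long.
    The max_items threshold is an illustrative assumption.
    """
    problems = []
    for i, text in enumerate(items, start=1):
        if text.strip().endswith("?"):
            problems.append(f"Item {i} is phrased as a question")
    if len(items) > max_items:
        problems.append(f"Survey has {len(items)} items (over {max_items})")
    return problems

items = ["The training was helpful to me.",
         "Do you like the grading policies?"]
print(pilot_check(items))  # ['Item 2 is phrased as a question']
```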
8.8 Benefits and Limitations of Surveys
No research technique is perfect for all situations, and every approach will have advantages
and disadvantages, depending on the context. Thus, a survey may not always be advisable
when a business researcher wants to know more about a specific topic. Specific instances
when a survey would not be advised include (a) in a labor situation, either before or during
a strike; (b) times when there is an increased level of strife in an organization and there is
high risk that the survey process could be mismanaged; (c) when there is a curiosity about
the topic but no commitment to act on the survey results; (d) when the survey outcomes
will be used in a deceptive manner (survey not actually administered for the stated purpose); and (e) if an intense understanding of a complex topic is desired, a survey might not
yield those results (Schiemann, 1991). As always, understanding the available literature
and gaining experience over time will help the business researcher determine if the potential rewards from conducting a survey outweigh the risks such that the survey research
project is worth doing.
Some of the limitations and risks of the survey research approach include (a) a lack of control over variables of interest; (b) a low response rate may be problematic; (c) an ambiguous
survey may lead to difficulties in interpretation; (d) in some contexts, participants may not
believe their data are truly anonymous and confidential; (e) the possibilities of bias can be
present if nonresponse rates are high or if socially desirable responding is occurring; and (f) a
survey research approach almost never allows for cause-and-effect conclusions (Fowler, 1998;
Seashore, 1987).
Surveys do have advantages though, such as allowing for anonymity of responses and statistical analysis of large amounts of data; they can also be relatively cost effective, sampling
mechanisms can be carefully controlled, and by using standardized questions, change can be
detected over time (Seashore, 1987).
Surveys are pervasive throughout culture. The ability to properly design a survey and interpret its results is a skill that well suits business researchers for a future in the workplace. But
it is important to remember that surveys are a measure of self-report and not actual behavior.
There are multiple reasons why survey data may be inaccurate; it could be that the respondents do not know the answer, they know the answer but cannot recall the answer, they do
not understand the question (but answer anyway), or they just choose not to answer (Fowler,
1998). Because most survey research does not share the same characteristics as experimental designs, it is important not to overinterpret the results of survey research—the survey
approach is powerful in helping researchers identify the relationships between variables and
differences among groups of people, but the results are only as good as the quality of the
survey's design.
Summary & Resources
Summary
• As with most concepts in business research, the complexity of the topic can range from easy to difficult, and survey research (the design, data collection, and data analyses) can range from simple to complex.
• With the acquisition of any skills, practice over time will help to develop and internalize the expertise needed to design survey research to meet the needs of business clients, both within your organization and for external clients.
• In some ways, survey construction and design can be part art and part science, and it seems that the outcomes of survey data are everywhere.
• Pilot testing a survey with a small number of trusted individuals can aid in the debugging process, and it is key to remember that the survey approach is not a one-size-fits-all approach.
• As with any research approach, its strengths and weaknesses must be carefully considered prior to embarking on a major research project that commits extensive time, resources, and expertise to make sense of the data collected.
Post-Test
1. A researcher who stops people on the street to survey them is using
a. probability sampling.
b. nonprobability sampling.
c. simple random sampling.
d. stratified sampling.
2. In which method of sampling are participants recruited by other survey participants
and therefore unlikely to represent the larger population?
a. snowball sampling
b. convenience sampling
c. quota sampling
d. cluster sampling
3. Which two survey methods can have drawbacks in terms of coverage?
a. in-person interviews and mailed surveys
b. online surveys and mailed surveys
c. in-person interviews and telephone interviews
d. online surveys and telephone interviews
4. A company emails customers a reminder to complete and send back a survey that was
mailed to their home. This is an example of
a. representativeness.
b. snowball sampling.
c. a mixed mode approach.
d. comparison of methodologies.
5. A survey that collects data at one point in time is known as a
a. time series survey design.
b. longitudinal study.
c. mixed mode survey approach.
d. cross-sectional survey.
6. A researcher would like to find out why certain changes in consumer behavior occur
over time. To do this, he studies the same group of people at several points over a few
years. This researcher is conducting a
a. cohort study.
b. panel study.
c. cross-sectional survey.
d. focus group.
7. Asking survey questions that are not specific enough is considered a form of
measurement error.
a. true
b. false
8. Survey researchers should look for bias in respondents or nonrespondents when they
receive a response rate of
a. 25% or less.
b. 75% or less.
c. 75% or more.
d. 90% or less.
9. Surveys are useful tools for understanding respondents’ skills.
a. true
b. false
10. Which of the following is an example of a reverse-scored survey item?
a. The training was helpful to me.
b. The information session was boring.
c. The product provides a good value for the money.
d. I have NOT benefitted from the company’s service.
11. A survey provides a statement and then asks respondents whether they completely
disagree, generally disagree, generally agree, or completely agree with that statement.
This is an example of a(n)
a. Likert scale.
b. dichotomous scale.
c. Likert-type scale.
d. semantic differential scale.
12. Which type of scale would you choose if you wanted to measure emotions?
a. Likert scale
b. dichotomous scale
c. Likert-type scale
d. semantic differential scale
13. In pilot testing, researchers should check that their survey includes an appropriate
vocabulary level, along with an adequate font size.
a. true
b. false
14. Which of the following is a reason pilot tests are important in survey research?
a. Pilot tests show whether participants understand the survey questions, which
can keep an experiment from failing.
b. Pilot tests provide data to compare to the survey post-tests, allowing researchers
to measure the effectiveness of the intervention.
c. Pilot tests allow for triangulation, helping to eliminate researcher bias.
d. Pilot tests are necessary to protect the anonymity of participants, which makes
them more likely to answer the survey questions honestly.
15. One advantage of surveys, compared to other research methods, is the greater control
they offer over variables of interest.
a. true
b. false
16. How can researchers successfully use the survey method to detect change over time?
a. by using standardized questions
b. by statistically analyzing large amounts of data
c. by carefully controlling sampling mechanisms
d. by drawing cause-and-effect conclusions
Answers
1. b. nonprobability sampling. The correct answer can be found in Section 8.1.
2. a. snowball sampling. The correct answer can be found in Section 8.1.
3. d. online surveys and telephone interviews. The correct answer can be found in
Section 8.2.
4. c. a mixed mode approach. The correct answer can be found in Section 8.2.
5. d. cross-sectional survey. The correct answer can be found in Section 8.3.
6. b. panel study. The correct answer can be found in Section 8.3.
7. a. True. The correct answer can be found in Section 8.4.
8. a. 25% or less. The correct answer can be found in Section 8.4.
9. b. False. The correct answer can be found in Section 8.5.
10. b. The information session was boring. The correct answer can be found in
Section 8.5.
11. c. Likert-type scale. The correct answer can be found in Section 8.6.
12. d. semantic differential scale. The correct answer can be found in Section 8.6.
13. a. True. The correct answer can be found in Section 8.7.
14. a. Pilot tests show whether participants understand the survey questions, which can
keep an experiment from failing. The correct answer can be found in Section 8.7.
15. b. False. The correct answer can be found in Section 8.8.
16. a. by using standardized questions. The correct answer can be found in Section 8.8.
Questions for Critical Thinking
1. Throughout your life you have likely been a participant in at least one survey
research study, whether it be a telephone survey, a mail survey, someone approaching you at the mall and wanting to ask you some questions, or an Internet survey
that pops up as you enter a website. In considering these different survey modalities,
does the way in which you are asked the questions affect whether or not you will
answer them? Which survey modality would you prefer? Do you think there may be
an age difference regarding survey mode? Explain.
2. Sometimes errors are obvious, and sometimes they are quite difficult to find. In
looking back on the different types of errors possible in survey research, generate
an example from your own experience that demonstrates coverage error, sampling
error, measurement error, and nonresponse error. Describe the experience in detail,
pointing out which error occurred on which occasion.
3. Have you ever attempted to answer a survey item, but were unable to because the scale
didn’t make sense or there was an error? In thinking about recent surveys that you may
have completed, what scales were used, and what types of data analytic approaches
were linked to the scale selected? For instance, if you were answering a survey item on
a Likert-type agreement scale where 1 = strongly disagree and 5 = strongly agree, what
type of statistical analysis might you use (depending on the other variable)? If you were
going to present this data to a client, what type of chart might you use, and why?
Key Terms
closed-ended survey items A type of
survey question where all possible answers
are provided, and the participant selects the
items closest to their own beliefs.
cluster sampling The sampling practice of
“clustering” groups of a population instead
of evaluating each individual person in order
to gain information when it is impossible or
impractical to compile an exhaustive list of
members comprising the target population.
cohort study A study design in which new
samples of individuals are followed over
time.
coverage The issue of who has Internet
access and who does not that provides a
barrier to obtaining information through
online surveys; similar issue involved with
telephone surveys.
coverage error An error regarding the
methodology used including access to
Internet, use of telephones, and other
methodologies.
cross-sectional survey design A study
design where data collection occurs at a single
point in time with the population of interest.
data cleaning A method of reviewing data
entry to ensure that it has been handled and
entered accurately.
demographics Variables used to identify
the traits of a study population.
dichotomous scale A scale in which there
are only two possible responses.
focus group A group of people who are
interviewed at the same time for the purpose of conducting a survey.
in-person interviews A research methodology that allows an interviewer and a
participant to build rapport through conversation and eye contact, which might allow
for deeper questions to be asked about the
topic of interest.
longitudinal survey A study design where
data collection occurs at several different
points over an extended period of time.
mixed mode approach A study design
where multiple research modalities are
accessed to achieve the research goals.
multistage sampling The two-stage sampling practice involving the formation of
clusters as a primary selection, then sampling members from the selected clusters to
produce a final sample.
nonprobability sampling The sampling
practice where the probability of each participant being selected for a study is unknown
and sampling error cannot be estimated.
nonresponse error An error occurring
when there is a response rate of 25% or less
for a particular question.
open-ended survey items A type of survey
item that is answered in words or phrases.
panel study A study design in which the
same people are studied over time, spanning
at least two points in time.
pilot test A “practice run” of a questionnaire used to determine weaknesses and
optimize the length of the scale for adequate
response rate. The conditions in which the
survey will be administered are typically
replicated as close as possible to the actual
survey administration.
probability sampling The sampling practice where the probability of each participant being selected for a study is known and
sampling error can be estimated.
quota sampling The sampling practice
where a researcher identifies a target population of interest and then recruits individuals (nonrandomly) of that population to
participate in a study.
representativeness The assumption that
a sample will resemble all qualities of the
general population in order to ensure that
results of a sample can be applied to the
whole general population.
sampling error An error occurring when
all potential participants from a population
may not be represented in a sample.
scale A tool used to measure a person’s
attitudes, perceptions, behaviors, etc., that is
chosen to best represent a study.
semantic differential scale A survey response scale used to measure affect and emotion using dichotomous pairs of words and phrases that a participant evaluates on a scale of one to seven.
simple random sample The practice of the purest form of sampling, and also one of the rarest techniques used, where everybody in the survey population has the same probability of being tested.
snowball sample The sampling practice where members of the target population of interest are asked to recruit other members of the same population to participate in the study.
stratified sample The practice of dividing a sample into subcategories (strata) in a way that identifies existing subgroups in a general population in order to make a sample the same proportion as displayed in a population.
systematic random sample The sampling practice in which every nth person from a sample is selected.
visual analog scale A survey response scale used to obtain a score along a continuum, where a participant places a checkmark to indicate where his or her opinion falls along the scale.
volunteer sample The common sampling practice where volunteers are asked to participate in a survey.
Additional Resources
• A journal article in which the authors address key issues in international survey research: http://www.harzing.com/intresearch_keyissues.htm
• A brief review of the different types of survey methods available for small to medium businesses: http://www.surveybuilder.info/various-survey-methods-for-small-and-medium-businesses/
• A comprehensive overview of a survey system utilized in business: http://www.surveysystem.com/sdesign.htm
Secondary Data Analysis
11
Learning Objectives
After reading this chapter, you should be able to:
• Explain what secondary data are and identify their sources.
• Determine when to use different data analysis procedures and demonstrate awareness about
determining the feasibility of secondary data analysis studies.
• Articulate the advantages and the disadvantages of using secondary data.
Lan81479_11_c11_283-294.indd 283
5/22/14 2:26 PM
Pre-Test
1. Inventory management, sustainability, demand management, and market trends are all
business topics that have been studied using secondary data.
a. true
b. false
2. Geographic Information Systems use assumptions from various cases to emulate and
predict real-world behaviors.
a. true
b. false
3. One advantage of using secondary data is that a data set can contain thousands of
variables of interest.
a. true
b. false
Answers
1. a. true. The correct answer can be found in Section 11.1.
2. b. false. The correct answer can be found in Section 11.2.
3. a. true. The correct answer can be found in Section 11.3.
Introduction
There is clearly a “green” movement alive these days, with an emphasis on reusing and recycling
items so that there is less pollution, a reclamation of useable goods and key minerals, and so on.
Recycling takes many forms, and it even appears in this chapter to an extent. Those researchers who employ secondary data analysis techniques are essentially recycling available data
for new purposes. Sometimes the recycled data is a great match and fit for the secondary data
analysis project, but this approach is not without its disadvantages and drawbacks. This chapter
provides a brief overview of this approach and its key components.
11.1 Secondary Data
Essentially, primary data are collected when a researcher does an empirical research
study and generates new data to be analyzed and interpreted. In this type of research, the
researcher or team designs, collects, and analyzes the original data. Secondary data are data
that are already in existence and are reused (think “recycled”). That is, secondary data are not
directly compiled by the researcher conducting the analysis (Koziol & Arthur, 2011; Tasic &
Feruh, 2012). For example, the demographic and economic data gathered and published by a
government (such as U.S. census data) become secondary data to researchers outside of the
agency that specifically gathered the data. Research with secondary sources is sometimes
known as desk research (Crawford, 1997).
Secondary data analysis techniques have broad applications throughout business. In an
analysis of research articles published in the Journal of Business Logistics for 2009–2010, the
variety of business topics studied using secondary data is remarkable (Rabinovich & Cheon,
2011). These topics include transportation networks, market trends, inventory management,
competitive dynamics, fulfillment and distribution, transportation safety, costs and financial
performance, sustainability, risk and disaster management, forecasting, human resources,
information technology, demand management, and buyer–supplier relationships.
For example, Hofer, Eroglu, and Hofer (2012) examined the role of inventory leanness on financial performance in part using secondary data analysis. Inventory leanness uses an approach
similar to just-in-time practices to minimize waste, with the goals being lower inventory, better
quality, and shorter production times. To help ascertain the effects of inventory leanness, Hofer
et al. (2012) used company-level finance and inventory data from a Standard & Poor database,
which allowed access to data from 1,421 firms in the United States. These data, combined with
other approaches in the study, allowed the authors to conclude that inventory leanness does
contribute positively toward overall company performance. Although it’s a complex approach,
secondary data analysis can yield important insights not easily gained otherwise.
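As a purely illustrative sketch of working with firm-level secondary data, the code below computes a simple inventory-to-sales ratio as a crude leanness proxy. Hofer et al. (2012) used a more sophisticated measure on Standard & Poor's data; the records and field names here are invented:

```python
# Hypothetical firm-level records drawn from a secondary source.
# A lower inventory-to-sales ratio suggests leaner inventory practice.
firms = [
    {"firm": "A", "inventory": 50.0, "sales": 1000.0},
    {"firm": "B", "inventory": 300.0, "sales": 1000.0},
]

def inventory_to_sales(record):
    """Compute the inventory-to-sales ratio for one firm record."""
    return record["inventory"] / record["sales"]

for f in firms:
    print(f["firm"], inventory_to_sales(f))  # A 0.05, then B 0.3
```

Relating such a ratio to financial performance across many firms is what makes this a secondary data analysis: the researcher analyzes records originally compiled by someone else.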
Most researchers who work in the field of secondary data analysis classify this type of research
into two subcategories: internal data (collected by the organization for other reasons) and
external data (data collected outside of an organization) (Dobson, 2014). Although this
advice may differ depending on the source, it is recommended that internal sources be considered first because internal data are proprietary and are exclusive to the business, so rival
businesses will not have access to the information (Riley, 2012).
Internal Sources
There can be numerous sources of data found within a business, including customer data,
information about competitors, and industry-wide data. Examples of internal data sources
include sales data (invoices, quarterly sales reports), customer loyalty cards, financial data
(such as accounts receivable reports), transport data, reports of complaints and critical incidents, storage data, and advertising data (Crawford, 1997; Dobson, 2014). Even the success
of past advertising campaigns can be compared with sales invoices and become a source of
secondary data (Riley, 2012).
One of the main disadvantages of using internal data is that they will only reveal trends among current customers, and they do not provide much insight into industry-wide trends (Dobson, 2014).
External Sources
Commonly cited sources of external data include other companies, the press, academic
researchers, private sources, library sources, syndicated services, general business publications, statistics from the federal government and regulatory bodies, trade associations,
commercial data services, statistics agencies, and national and multinational organizations
(Cowton, 1998; Crawford, 1997; Steppingstones Partnership, 2014). Secondary data can also
be found in college/university records, journal supplements, and author websites (Koziol
& Arthur, 2011); an exhaustive listing of possible external sources is just not possible. The
advantages of external sources of data are that they may be cheaper to procure than original
primary data, and they may allow for perspectives not readily available within an organization. In a similar vein, these same qualities can be drawbacks of external data, which may not be individually suited or applicable to a particular organization.
To get some ideas, the following are possible locations to find secondary data sets (Koziol &
Arthur, 2011):
• Inter-university Consortium for Political and Social Research—it provides data and educational opportunities (http://www.icpsr.umich.edu/icpsrweb/landing.jsp);
• the U.S. government’s open data warehouse (http://www.data.gov);
• U.S. Census Bureau website, providing measures of America’s people, places, and economy (http://www.census.gov); and
• a Penn State University resource that is a simple online data archive for population studies (http://sodapop.pop.psu.edu/data-collections).
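Once a data set has been downloaded from a source such as these, secondary analysis typically begins by loading and profiling it. A minimal sketch using Python's standard library; the file contents, states, and column names below are hypothetical stand-ins for a real census extract:

```python
import csv
import io

# Hypothetical extract; real secondary data would be downloaded from
# sources such as data.gov or census.gov and read from a file.
raw = """state,median_income
WI,61747
MN,68411
IA,61691
"""

rows = list(csv.DictReader(io.StringIO(raw)))
incomes = [int(r["median_income"]) for r in rows]
print(len(rows), min(incomes), max(incomes))  # 3 61691 68411
```

Profiling the data this way (row counts, ranges, missing values) helps a researcher judge whether the "recycled" data actually fit the new research question.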
After a company has determined that the examination of secondary data (either internally or
externally sourced) is a good idea, research questions and data analysis options are the next
key considerations.
Case Study: Mining Data to Support a Site Location Decision
Profile: Derrick Van Mell, Van Mell Associates
Derrick Van Mell, a consultant in strategic planning applied
to facilities management, conducted site location research
for a successful neighborhood center in Madison, Wisconsin. He combined data sources including demographics,
transportation, and land use to identify the best option to
serve a diverse constituency of visitors, employees, and
funders of the facility.
The Goodman Community Center has undergone several
changes in organization, funding, and names since it was
founded in 1948. Originally conceived as a youth activity
center, its activities expanded over the years to serve area
residents of all ages with a variety of educational, recreational, and social programs. The center
is located in a remarkably diverse 100-year-old neighborhood characterized by a mix of housing,
shopping, and jobs, where wealthy and low-income families live in close proximity.
Atwood Community Center, as it was once known, expanded from one building to four along
a busy commercial street as its programs expanded. At the beginning of the search for a new,
consolidated facility, center director Becky Steinhoff and the board of directors hired Van Mell
to research the Kupfer Ironworks, a century-old building located less than half a mile from the
center’s existing facilities.
“Sophisticated mapping can depict the intricate forces of the market, labor pool, and transportation web,” wrote Van Mell in Buildings Matter (2005). Van Mell drew on Geographic Information System (GIS) analysis for this project, merging cartography and statistical analysis to produce maps designed to aid decision making.
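A core GIS-style computation behind such maps is measuring how far each constituent lives from a candidate site. The following is a minimal sketch (the coordinates, site location, and participant records are all made up for illustration) using the standard haversine formula for great-circle distance:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3958.8  # mean Earth radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical candidate site and participant records (lat, lon, age group).
site = (43.09, -89.35)
participants = [
    (43.10, -89.33, "child"),
    (43.07, -89.36, "senior"),
    (43.12, -89.40, "adult"),
]

for lat, lon, group in participants:
    d = haversine_miles(lat, lon, site[0], site[1])
    print(f"{group}: {d:.2f} miles from candidate site")
```

Aggregating such distances by age group (children, seniors, and everyone in between, as in the center's plot) is one way secondary demographic data can directly inform a site location decision.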
Van Mell began by consulting demographic data to understand the population served. “The community center has many audiences, from the youngest to the oldest in our community. Each audience has different patterns of using the center, not just for different services but at different times
of day,” Van Mell said. Would relocating to the factory, located in a residential area along the rail
line between the area’s two major traffic arteries, discourage current members from traveling to
the center, or make it more available to a larger population?
“We created a plot of the current center participants, using color coding to indicate children,
seniors, and everybody in between,” said Van Mell. “We used address-specific information, and
then added general census information by census tract.” …