Human Resource Management - Personnel Selection

Asked: May 14th, 2019
$80

Question Description

Please answer both of the first two questions and any three of the remaining questions. This exam is worth 75 points – 15 points per question.

Please note that there is a happy medium that can be reached in terms of appropriate response length. That is, be succinct but thorough. As with assignments, for this final you should a) make sure you answer each part of the question to which you are responding, b) provide definitions of terms within the questions, and c) provide examples where appropriate. Also, please note that you will lose points if you do not cite appropriate references for statements you make – even if what you say is true. Include page numbers when direct quotes are used, and please use multiple sources – there were plenty of readings other than the textbook, and you should draw upon these in your answers. Do not forget to indicate which questions you are answering in the second part of the final (it is not always obvious).

Both of these first two questions are required:

Question #1: Imagine you are called upon to teach a one-hour seminar to managers on selection. In this one-hour time block, you are supposed to draw on one major topic as it relates to selection, and its importance for managers. What area would you choose? Why is this area central to selection? Don’t forget to provide ample support for your choice.

Question #2: A recurring theme this semester has been “fit”. Please answer each of the sub-points below:

a) At what level (job, group, etc.) and on what characteristics (e.g., culture, personality, etc.) should we assess fit? (5 points)

b) How do/should we measure this fit? (5 points)

c) Is selecting on “fit” good for the organization and individuals? (5 points)

Choose any three of the following questions:

Question #3: What are the advantages of conveying a realistic recruitment message as opposed to portraying the job in a way that the organization thinks that job applicants want to hear? In this answer, you will want to describe Realistic Job Previews, why they are proposed to work, and what research has told us about their effectiveness.

Question #4: Legal issues have emerged throughout the course. Based on these readings: What is the EEOC and what is its function? What is adverse impact and how would one typically determine if this is a problem? What is Affirmative Action? What is the purpose of Affirmative Action? Who is involved?
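One technical piece of Question #4 lends itself to a quick illustration: the common initial screen for adverse impact is the four-fifths (80%) rule from the Uniform Guidelines on Employee Selection Procedures (1978). A minimal sketch in Python, with hypothetical applicant counts:

```python
# Sketch of the "four-fifths" (80%) rule; the applicant counts are hypothetical.
def selection_ratio(hired, applicants):
    """Fraction of a group's applicants who were selected."""
    return hired / applicants

def shows_adverse_impact(focal_ratio, reference_ratio, threshold=0.8):
    """True when the focal group's selection ratio falls below 80% of the
    reference (highest-selected) group's ratio."""
    return focal_ratio / reference_ratio < threshold

reference = selection_ratio(hired=60, applicants=100)  # 0.60
focal = selection_ratio(hired=20, applicants=50)       # 0.40
print(shows_adverse_impact(focal, reference))          # 0.40/0.60 ≈ 0.67 → True
```

The 80% screen is a starting point rather than a verdict; the EEOC and courts also consider statistical significance and sample size.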

Question #5: Define criterion (predictive and concurrent), content, and construct validity. Your definition should cover how each type of validity is assessed. What does each type of validity tell you and for what should each type of validity be used?
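Criterion-related validity, whether predictive (criterion gathered later) or concurrent (criterion gathered at the same time as the predictor), is typically quantified as a correlation between predictor scores and the criterion. A minimal sketch with hypothetical data, not drawn from any of the course readings:

```python
from statistics import mean

# Hypothetical predictive-validity study: applicants' test scores correlated
# with job performance ratings collected after hiring.
def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

test_scores = [55, 60, 65, 70, 80, 85]
performance = [2.9, 3.1, 3.0, 3.6, 3.8, 4.0]
print(round(pearson_r(test_scores, performance), 2))  # ≈ 0.96
```

A concurrent design would use the same computation; only the timing of the criterion measurement differs.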

Question #6: In what ways are the following three initial assessment methods similar and in what ways are they different: application blanks, biographical information, and reference reports?

Question #7: Sometimes in papers, you will see statements such as, “tests of cognitive ability have been shown to be more predictive of job performance than were interviews.” However, this statement seems to confuse “methods” and “constructs”. What are “methods” and “constructs”? Why is it important to make the distinction between methods and constructs? Illustrate this distinction with a specific example from the literature.

Unformatted Attachment Preview

Journal of Applied Psychology, 2003, Vol. 88, No. 4, 635–646
Copyright 2003 by the American Psychological Association, Inc. 0021-9010/03/$12.00 DOI: 10.1037/0021-9010.88.4.635

A Meta-Analysis of Job Analysis Reliability

Erich C. Dierdorff and Mark A. Wilson
North Carolina State University

Average levels of interrater and intrarater reliability for job analysis data were investigated using meta-analysis. Forty-six studies and 299 estimates of reliability were cumulated. Data were categorized by specificity (generalized work activity or task data), source (incumbents, analysts, or technical experts), and descriptive scale (frequency, importance, difficulty, time-spent, and the Position Analysis Questionnaire). Task data initially produced higher estimates of interrater reliability than generalized work activity data and lower estimates of intrarater reliability. When estimates were corrected for scale length and number of raters by using the Spearman-Brown formula, task data had higher interrater and intrarater reliabilities. Incumbents displayed the lowest reliabilities. Scales of frequency and importance were the most reliable. Implications of these reliability levels for job analysis practice are discussed.

Since mandating the legal requirements for the use of job analyses (Uniform Guidelines for Employee Selection Procedures, 1978), the importance of obtaining job analysis data and assessing the reliability of such data has become a salient issue to both practitioners and researchers. It has been estimated that large organizations spend between $150,000 and $4,000,000 annually on job analyses (Levine, Sistrunk, McNutt, & Gael, 1988). Furthermore, it appears probable that job analysis data will continue to undergo increasing legal scrutiny regarding issues of quality, similar to the job-relatedness of performance appraisal data, which have already seen a barrage of court decisions during the past decade (Gutman, 1993; Werner & Bolino, 1997). Considering the widespread utility implications, legal issues, and organizational costs associated with conducting a job analysis, it would seem safe to assume that the determination of the general expected level of job analysis data reliability should be of primary importance to any user of this type of work information. Prior literature has lamented the paucity of systematic research investigating reliability issues in job analysis (Harvey, 1991; Harvey & Wilson, 2000; Morgeson & Campion, 1997; Ployhart, Schmitt, & Rogg, 2000).

Most research delving into the reliability and validity of job analysis has been in a search for moderating variables of individual characteristics, such as demographic variables like sex, race, or tenure (e.g., Borman, Dorsey, & Ackerman, 1992; Landy & Vasey, 1991; Richman & Quiñones, 1996) or other variables like performance and cognitive ability (e.g., Aamodt, Kimbrough, Keller, & Crawford, 1982; Harvey, Friedman, Hakel, & Cornelius, 1988; Henry & Morris, 2000). The overall conclusions of these research veins have been mixed, with some showing significant evidence of moderation, and others displaying none. It is interesting to note that only recently have the definitions of reliable and valid job information received directed attention and discourse (Harvey & Wilson, 2000; Morgeson & Campion, 2000; Sanchez & Levine, 2000).

More recent research has tended to frame the quality of job analysis data through views ranging from various validity issues (Pine, 1995; Sanchez & Levine, 1994), to potential social and cognitive sources of inaccuracy (Henry & Morris, 2000; Morgeson & Campion, 1997), to the merits of job analysis and consequential validity (Sanchez & Levine, 2000), and to an integrative approach emphasizing both reliability and validity examinations (Harvey & Wilson, 2000; Wilson, 1997). As an important component of data quality, we sought to specifically examine the role of reliability in relation to job analysis data quality.

Purpose

The principal purpose of this study was to provide insight into the average levels of reliability that one could expect of job analysis data. Coinciding with this purpose were more specific examinations of the reliability expectations given different data specificity, various sources of data, variety of descriptive scales, and techniques of reliability estimation. The hope embedded in estimating average levels of reliability was that these data may in turn inspire greater attention to the reliability of job analysis data, as well as be used as reference points when examining the reliability of such data. We feel that not enough empirical attention has been paid to this issue, and that the availability of such reliability reference points could be of particular importance to practitioners conducting job analyses. To date, no such estimates have been available, and practitioners have had no means of comparison with which to associate the reliability levels they may have obtained. Moreover, elucidation of the levels of reliability across varying data specificity, data sources, and descriptive scales would provide useful information regarding decisions surrounding the method, sample, format, and overall design of a job analysis project.

Erich C. Dierdorff and Mark A. Wilson, Department of Psychology, North Carolina State University.
This research was conducted in partial fulfillment of the requirements for the master’s degree at North Carolina State University by Erich C. Dierdorff. Correspondence concerning this article should be addressed to Mark A. Wilson, Department of Psychology, North Carolina State University, P.O. Box 7801, Raleigh, North Carolina 27695-7801.

Scope and Classifications

Work information may range from attributes of the work performed to the required attributes of the workers themselves. Unfortunately, this common collective conception of work information (job-oriented vs. worker-oriented) can confound two distinctive realms of data. Historically, Dunnette (1976) described these realms as “two worlds of human behavioral taxonomies” (p. 477). Dunnette’s two worlds referred to the activities required by the job and the characteristics of the worker deemed necessary for successful performance of the job. More recently, Harvey and Wilson (2000) contrasted “job analysis” versus “job specification,” with the former collecting data about work activities and the latter collecting data describing worker attributes presumably required for job performance. The present study focused only on reliability evidence obtained through data that described the activities performed within a given work role (i.e., job analysis). This parameter allowed the study’s investigations to examine the reliability of data that carry the feasibility of verification through observation, as opposed to latent worker attributes typically described by job specification data. The primary classification employed by the present study delineated job analysis data by two categories of specificity: task and general work activity (GWA). These classifications were not meant to be all-inclusive but rather were meant to capture the majority of job analysis data.
Task-level data were defined as information that targets the more microdata specificity (e.g., “cleans teeth using a water-pick” or “recommends medication treatment schedule to patient”). In contrast, GWA-level data were defined similarly to the description offered by Cunningham, Drewes, and Powell (1995), portraying GWAs as “generic descriptors,” including general activity statements applicable across a range of jobs and occupations (e.g., “estimating quantity” or “supervising the work of others”). An important caveat to data inclusion was that only GWAs relating to the work performed within a job were used, thus excluding what have been referred to as “generalized worker requirements” such as knowledge, skills, and abilities (KSAs; McCormick, Jeanneret, & Mecham, 1972). By separately coding tasks and GWAs, the present study allowed an investigation of job-analysis reliability relative to the specificity domain of the collected data. Prior literature has suggested that job-analysis data specificity may affect the reliability of such data (Harvey & Wilson, 2000; K. F. Murphy & Wilson, 1997), with more specific data showing higher reliability levels. Moreover, with the increasing prevalence of “competency” modeling, which in part incorporates more general levels of behavioral information (Schippmann, 1999), as well as the recent push to more generic activities for purposes of job and occupation analysis (Cunningham, 1996), the separate examination of GWAs allowed for interpretative comparisons with increasingly prevalent and contemporary job and occupation analysis approaches. In addition to data specificity, the present study incorporated a classification for the source from which the data were generated. Sources of job-analysis information were classified into three groupings: (1) incumbents, (2) analysts, and (3) technical experts. Incumbent sources referred to job information derived from jobholders. 
These data were usually collected through self-report surveys and inventories. Analyst-derived job information came from nonjobholder professional job analysts. These data were generally gathered through methods such as observation and interviewing and were then used to complete a formal job-analysis instrument (i.e., Position Analysis Questionnaire [PAQ]). The third source group, technical experts, captured data obtained through individuals defined specifically as training specialists, supervisors, or higher level managers (Landy & Vasey, 1991). Because many technical experts can also be considered job incumbents, this designation was reserved only for data that were explicitly described as being collected from technical experts, supervisors, or some other “senior level” source. By source-coding reliability evidence, analyses could reveal any changes in the magnitude of reliability estimates in relation to these common sources of job-analysis data. Prior empirical investigation has suggested the possibility of differential levels of reliability across various classifications of respondents (Green & Stutzman, 1986; Henry & Morris, 2000; Landy & Vasey, 1991), such as performance level of the incumbent and various demographic characteristics of subject matter experts. The present research sought to compare the reliability levels across sources rather than only within a given source as in previous research. A third classification was used to categorize the type of descriptive scale upon which a job was analyzed. Some common examples of descriptive scales are time spent on task, task importance, and task difficulty (Gael, 1983). Past research has suggested that the variety of scales used in job analysis yield different average reliability coefficients (Birt, 1968). For instance, scales of frequency of task performance and task duration have displayed reliabilities ranging from the .50s to the .70s (McCormick & Ammerman, 1960; Morsh, 1964).
Difficulty scales have generally been found to have lower reliabilities than other descriptive scales, with estimates ranging from the .30s to the .50s (McCormick & Ammerman, 1960; McCormick & Tombrink, 1960; Wilson, Harvey, & Macy, 1990). Thus, data were coded for the commonly used descriptive scales of frequency, importance, difficulty, and time spent. GWA data derived from the PAQ (McCormick, Jeanneret, & Mecham, 1972), which is arguably the most widely used and researched generic job analysis instrument, were additionally coded. To allow for a comparative analysis of reliability across the three aforementioned classifications, it was necessary to group coefficients into appropriate estimation categories. Therefore, reliability estimates were delineated by their computational approach. Two approaches commonly used in job analyses to estimate reliability were chosen as the categories employed by this study. Both types of reliability estimation are discussed in the ensuing section.

Types of Reliability Estimates Used in Job Analysis

The two most commonly used forms of reliability estimation are interrater and intrarater reliability (Viswesvaran, Ones, & Schmidt, 1996). In the context of job analysis practice, interrater reliability seems to be the more prevalent of the two techniques. Interrater reliability identifies the degree to which different raters (i.e., incumbents) agree on the components of a target work role or job. Interrater reliability estimations are essentially indices of rater covariation. This type of estimate can portray the overall level of consistency among the sample raters involved in the job analysis effort. Typically, interrater reliability is assessed using either Pearson correlations or intraclass correlations (ICC; see Shrout & Fleiss, 1979, for a detailed discussion). Most previous empirical literature has focused on the intrarater reliability of job analysis data.
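The intraclass correlation mentioned here can be made concrete. The following is an illustrative implementation of the one-way random-effects ICC(1,1) from the Shrout and Fleiss taxonomy, not the authors' code, and the ratings are hypothetical:

```python
from statistics import mean

def icc_1(ratings):
    """One-way random-effects ICC(1,1). `ratings` holds one row per rated
    task, each row containing the scores from the same k raters:
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    n, k = len(ratings), len(ratings[0])
    grand = mean(v for row in ratings for v in row)
    row_means = [mean(row) for row in ratings]
    # Between-task and within-task mean squares:
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((v - m) ** 2 for row, m in zip(ratings, row_means)
              for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Two incumbents rating the importance of five tasks:
tasks = [[4, 5], [2, 2], [5, 4], [1, 2], [3, 3]]
print(round(icc_1(tasks), 2))  # ≈ 0.86
```

Perfect agreement across raters drives the within-task mean square to zero and the ICC to 1.0.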
Two forms of intrarater reliability commonly employed within job analysis are repeated item and rate–rerate of the same job at different points in time. Both of these estimates may be viewed as coefficients of stability (Viswesvaran et al., 1996). The repeated items approach can display the consistency of a rater across a particular job analysis instrument (i.e., task inventory), whereas the rate–rerate technique assesses the extent to which there is consistency across two administrations. Intrarater reliability is typically assessed using Pearson correlations.

Research Questions

We examined reliability from previously conducted job analyses by using the four aforementioned classifications. To explore this purpose, we used meta-analytic procedures. The purpose of these meta-analyses was to estimate the average reliability that one could expect when gathering work information through a job analysis at different data specificities from different sources and when using various descriptive scales. In short, we sought to investigate the following questions: What are the mean estimates of reliability for job analysis information, and how do these estimates differ in magnitude across data specificity, data source, and descriptive scale? Are the levels of interrater reliability higher or lower than levels of intrarater reliability? Finally, does the source of the job analysis information or the choice of descriptive scale affect the magnitudes of reliability estimates?

Method

Database

We conducted a literature search using standard and supplementary techniques in an attempt to lessen the effect of the “file drawer” problem—the increased probability of positive findings in published literature (Rosenthal, 1979). In the case of job analysis research, this could result in unrealistically high estimations of reliability.
In addition, many empirical studies about or using job analysis data only report reliability estimations as sidebars to the main topic, thus making it more difficult to locate these sources of reliability data. Using the standard technique, we used the Internet and other computer-based resources. Some examples of these sources were PsycINFO, PsychLit, job analysis–related Web sites and listservs, the National Technical Information Services database, as well as other online and offline library databases. Within these sources, we used keyword searches with terms such as “job analysis, job analysis accuracy, job analysis reliability, work analysis, and job information accuracy.” The majority of reliability data that we used in this study were gathered with this method. The supplementary technique, meant to expand the breadth of the literature search, used both ancestry and descendency approaches (Cooper, 1984), as well as correspondence with researchers in the field of job analysis. The supplementary approach produced a substantial amount of reliability data in the form of technical reports and unpublished job analyses. Table 1 displays descriptive statistics of the included studies.

[Table 1, Descriptive Summary of Collected Data, reports the number of studies and the number of reliability estimates in each data category: specificity (task, GWA), source (incumbent, analyst, technical expert), and descriptive scale (frequency, importance, difficulty, time spent, and, for interrater data, the PAQ) within both interrater and intrarater reliability, plus publication type (journal, technical report, book, dissertation). The table's numeric columns did not survive extraction. Note. GWA = generalized work activity; PAQ = Position Analysis Questionnaire.]

Analyses

To be included in the meta-analyses, studies were first required to describe the approach used to assess reliability of the job data. Those that did not assess reliability according to the aforementioned estimation types were excluded. Second, the sample size used in the reliability estimation was required. Third, studies were required to assess the requirements of the job itself, not merely attributes of the workers. Once the pool of studies was assembled, we coded the data for the purposes of a comparative analysis. Coding allowed us to conduct separate meta-analyses within each of the study’s classifications, hence making the average correlation generated within each grouping more empirically justified. Two raters independently coded the gathered studies according to the four aforementioned classifications. Interrater agreement of study coding was 98%. Disagreements were resolved through discussion, and no additional exclusions were necessary. We conducted a meta-analysis correcting only for sampling error for each of the distributions gleaned from the study’s classifications. When cumulating reliability across several past empirical studies, it may be necessary to determine whether results from various studies need to be adjusted to a common length of items or number of raters (interrater reliability) or to a common time interval (intrarater reliability). Two available options were to use the Spearman-Brown formula to bring all estimates to a common length or to use previous research investigating the functional relationship between time intervals and job analysis reliability.
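Both computations named here, the bare-bones meta-analytic mean (each reliability estimate weighted by its sample size, correcting only for sampling error) and the Spearman-Brown adjustment to a common length or number of raters, are simple enough to sketch. The input values below are hypothetical, and this is not the study's code:

```python
def weighted_mean_r(rs, ns):
    """Sample-size-weighted mean correlation: the bare-bones
    meta-analytic estimate that corrects only for sampling error."""
    return sum(r * n for r, n in zip(rs, ns)) / sum(ns)

def spearman_brown(r, k):
    """Project reliability r to k times the original length (k items
    or k raters): r_k = k * r / (1 + (k - 1) * r)."""
    return k * r / (1 + (k - 1) * r)

print(round(weighted_mean_r([0.5, 0.7], [100, 300]), 2))  # (50 + 210) / 400 → 0.65
print(spearman_brown(0.5, 3))                             # 1.5 / 2.0 = 0.75
```

Stepping an estimate down to a shorter form uses the same formula with a fractional k.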
The present study conducted meta-analyses both with and without the Spearman-Brown corrections of individual reliability estimates. Without evidence of the functional relationship affecting intrarater reliability of job analysis data, the only statement able to be proffered is that as the time interval increases, reliability generally decreases (Viswesvaran et al., 1996). Thus, no meta-analytic corrections were made to bring estimates of intrarater reliability to a common time interval. However, intrarater reliability in job analysis can be derived from either a rate–rerate or a repeated item approach. Therefore, to display the potential effects of time and allow comparison between these two common forms of intrarater reliability, separate meta-analyses for repeated item and rate–rerate reliabilities were conducted. The mean time interval for rate–rerate data from the gathered studies was 6.5 weeks and had a range of 1–16 weeks. Rate–rerate data comprised 84% of the collected intrarater reliability data and repeated item data made up the remaining 16%. As for a common length of items or number of raters, the body of literature on job analysis procedures does not concede a particular recommendation. Suggestions for item length and number of raters vary depending on the organization, project purposes, and the practical limitations of the project (Levine & Cunningham, 1999 ...

Tutor Answer

School: New York University



Personnel selection

Name of the student


Name of the Professor

Submission Date




Compulsory questions
Question 1
The area that I would choose is recruiting and labor markets. This topic is central to selection
because managers will learn the best approach to generating a pool of applicants
that is well suited to the organization's jobs. They will also learn what is required of them to
select a quality workforce from the labor force population.
Question 2
a) Fit should be assessed at the job level. This is because doing so will enable an employer to
determine whether an employee's strengths, experience, and needs
match the requirements of the job and the work environment as well. On the other hand, fit
should also be assessed on cultural characteristics; doing so will help an employer
find out whether an employee will be able to work well within the culture of the
organization. In addition, the employer will also find out whether the culture of the organization
matches what an employee requires to succeed in the organization's work environment
(Tillmann, 2016).
b) Fit should be measured using various methods, such as interviews and job-fit instruments
that are administered in the form of surveys and self-report questionnaires. In addition,
technological advan...


Tutor went the extra mile to help me with this essay. Citations were a bit shaky, but I appreciated how well he handled APA style and how willing he was to change it even though I didn't specify. Got a B+, which is believable and acceptable.
