Job Analysis

Anonymous
Asked: Feb 9th, 2019
$20

Question Description

Readings:

Department of Labor (1998). O*NET version 1.0. http://online.onetcenter.org/

Heneman, H. G., Judge, T. A., & Kammeyer-Mueller, J. D. (2014). Staffing Organizations (8th ed.). New York: McGraw-Hill, Chapter 4.

Sackett, P. R., & Laczo, R. M. (2003). Job and work analysis. In W. C. Borman, D. R. Ilgen, R. J. Klimoski, & I. B. Weiner (Eds.), Handbook of Psychology (pp. 21-37). Hoboken, NJ: John Wiley & Sons, Inc.

**Jeanneret, P., & Strong, M. (2003). Linking O*NET job analysis information to job requirement predictors: An O*NET application. Personnel Psychology, 56, 465-492.

**Dierdorff, E. C., & Wilson, M. A. (2003). A meta-analysis of job analysis reliability. Journal of Applied Psychology, 88, 635-646.

Overview:

The law, science, and practice of personnel psychology agree on one point: a thorough job analysis is needed before developing selection and placement applications. In this module, we will review the science and practice of job analysis. In the process, you will learn an essential set of terms as well as something about the approach that is taken to analyzing jobs.

Additional terms:

Criterion development is the first step in developing and evaluating a selection system. It is the process of establishing conceptual and operational definitions for essential job functions. Conceptual definitions usually include clear verbal descriptions of performance dimensions. Operational definitions consist of the measures used to identify the level of individual performance in a reliable and accurate fashion. For example, for the job of graduate student, an important performance dimension would be completing important assignments thoroughly and accurately in a timely fashion. This is a statement of the conceptual definition of a performance criterion. The operational definition of this criterion might include several indicators, including the professor's evaluation of the accuracy and thoroughness of the assignment, as well as a computer-based indicator of when the assignment was submitted. It should be noted that, since performance is multi-dimensional, there will usually be multiple criterion measures for any job (and often for a single conceptual criterion).
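The conceptual/operational distinction can be made concrete with a small sketch in code. Everything here is a hypothetical illustration of the graduate-student example above, not part of any standard instrument: the 1-5 professor rating, the timeliness indicator, and the sample records are all assumed for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class AssignmentRecord:
    """One graded assignment for the hypothetical 'graduate student' job."""
    professor_rating: float       # accuracy/thoroughness on an assumed 1-5 scale
    hours_before_deadline: float  # positive = early, negative = late (assumed measure)

def timeliness_indicator(record: AssignmentRecord) -> float:
    """Operationalize timeliness as a 0/1 indicator: submitted by the deadline or not."""
    return 1.0 if record.hours_before_deadline >= 0 else 0.0

def criterion_scores(records: list[AssignmentRecord]) -> dict[str, float]:
    """Average each operational indicator separately; because performance is
    multi-dimensional, the two scores are reported side by side rather than
    being forced into a single number."""
    n = len(records)
    return {
        "quality": sum(r.professor_rating for r in records) / n,
        "timeliness": sum(timeliness_indicator(r) for r in records) / n,
    }

if __name__ == "__main__":
    history = [AssignmentRecord(4.5, 12.0), AssignmentRecord(3.0, -2.0)]
    print(criterion_scores(history))  # {'quality': 3.75, 'timeliness': 0.5}
```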

Criterion development is accomplished as part of the job analysis process. After developing a list of task statements for a job, job analysts usually distribute surveys to job incumbents asking their views about the importance of each task statement to the job. Subject Matter Experts (SMEs) then make judgments about the importance of various Knowledge, Skills, Abilities, and Other characteristics (KSAOs) to the accomplishment of essential job functions. These KSAOs are then used to develop selection devices, and clusters of tasks (duties) and KSAOs are used to develop criterion dimensions. Measures of criterion performance are then used to evaluate the predictive accuracy of selection devices. Thus, criterion development is essential to criterion-based validation of selection devices.
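The survey-and-linkage workflow described above can be pictured with a small tabulation sketch. All task statements, KSAO labels, and ratings below are invented for illustration; a real job analysis would draw them from incumbent questionnaires and SME panels, and the simple importance-weighted sum is only one of several defensible ways to summarize a task-KSAO linkage.

```python
from statistics import mean

# Incumbent importance ratings for each task statement (assumed 1-5 scale).
task_ratings = {
    "Prepare monthly status reports": [4, 5, 4],
    "Schedule project review meetings": [3, 3, 4],
}

# SME linkage judgments: how strongly each KSAO is needed for each task (assumed 0-3 scale).
linkage = {
    ("Prepare monthly status reports", "Writing skill"): 3,
    ("Prepare monthly status reports", "Knowledge of administration"): 2,
    ("Schedule project review meetings", "Time management skill"): 3,
}

# Mean importance per task, the usual basis for flagging essential functions.
task_importance = {task: mean(ratings) for task, ratings in task_ratings.items()}

# Importance-weighted KSAO scores: KSAOs tied to important tasks rise to the top,
# which is one way to prioritize content for selection devices.
ksao_weight: dict[str, float] = {}
for (task, ksao), strength in linkage.items():
    ksao_weight[ksao] = ksao_weight.get(ksao, 0.0) + strength * task_importance[task]

print(sorted(ksao_weight.items(), key=lambda kv: kv[1], reverse=True))
```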

Important terms from readings:

Job, position, task, KSAO, task- and KSAO-oriented job analysis, position description, job specifications, task statement, task questionnaire, SME, task-KSAO linkage

---------------------

QUESTION

For this assignment, I want you to conduct a “mini job analysis” for the job you currently hold or the job you have most recently held. This assignment is worth 15 points: the first section is worth 10 points, and the second section is worth 5 points.

For this assignment, consider yourself a subject matter expert (SME) and do the following:

1. Construct a job requirements matrix that includes

  • Specific tasks

  ◦ You must have at least 10 specific tasks. Make sure your task statements are written correctly. Heneman and Judge provide very nice examples of techniques for writing task statements. O*NET statements do not go into as much depth as do the task statements that Heneman and Judge present. I would prefer that, for this part of the assignment, you model your work after Heneman and Judge’s suggestions rather than after O*NET.

  • Task dimensions / duties

  ◦ You must have at least 3 task dimensions / duties that encompass the specific tasks.

  • Importance (indicate what your scale is, whether it is 0-100, percent of time spent on the task, relative amount of time spent rated on a Likert-type scale, etc. Just be sure you are clear about how you are operationalizing the scale)
  • Knowledge (use O*NET knowledge areas – listed in H&J and on the O*NET site)
  • Skills (use O*NET skills – listed in H&J and on the O*NET site)
  • Abilities (use O*NET abilities – listed in H&J and on the O*NET site)
  • The importance of each of the KSAs (indicate what your scale is, whether it is 0-100, whether the KSA is needed at entry or can be trained – an issue in the Jones et al. article, etc. Just be sure you are clear about how you are operationalizing the scale)
  • The context in which the job occurs (use O*NET work context descriptors – listed in H&J and on the O*NET site)

Hint: Use Heneman & Judge’s (2009) example as your guide on how to format the matrix, but be sure to look at O*NET because there is a lot of useful information and many examples of task statements and KSAs.
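If it helps to see the skeleton of such a matrix, here is a minimal sketch of one way to lay it out as a spreadsheet-ready table. The column headings simply mirror the assignment bullets above, and the single sample row is a made-up placeholder, not a model answer for any particular job or a reproduction of Heneman and Judge’s format.

```python
import csv

# Column headings mirror the assignment bullets; the row below is an invented placeholder.
COLUMNS = [
    "Task dimension / duty", "Specific task", "Task importance (1-5)",
    "Knowledge (O*NET)", "Skills (O*NET)", "Abilities (O*NET)",
    "KSA importance (1-5)", "Job context (O*NET)",
]

rows = [
    {
        "Task dimension / duty": "Reporting",
        "Specific task": "Prepares weekly progress reports for the project sponsor",
        "Task importance (1-5)": 4,
        "Knowledge (O*NET)": "Administration and Management",
        "Skills (O*NET)": "Writing",
        "Abilities (O*NET)": "Written Expression",
        "KSA importance (1-5)": 5,
        "Job context (O*NET)": "Indoors, environmentally controlled",
    },
]

# Write the matrix to a CSV file that can be opened and formatted in Excel or Word.
with open("job_requirements_matrix.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```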

2. Create a job description for your current or most recent job. Although there is no standard job description format, for this assignment it must include the following:

  • Job Title
  • Job Summary
  • Essential functions (duties linked with tasks)
  • Qualifications
  • Job Context

You can add more to the job description, but you must at least have the above items. Also, the job description should be on its own page, no more than one page, and be aesthetically pleasing.

Unformatted Attachment Preview

Journal of Applied Psychology, 2003, Vol. 88, No. 4, 635–646. Copyright 2003 by the American Psychological Association, Inc. 0021-9010/03/$12.00 DOI: 10.1037/0021-9010.88.4.635

A Meta-Analysis of Job Analysis Reliability
Erich C. Dierdorff and Mark A. Wilson
North Carolina State University

Erich C. Dierdorff and Mark A. Wilson, Department of Psychology, North Carolina State University. This research was conducted in partial fulfillment of the requirements for the master’s degree at the North Carolina State University by Erich C. Dierdorff. Correspondence concerning this article should be addressed to Mark A. Wilson, Department of Psychology, North Carolina State University, P.O. Box 7801, Raleigh, North Carolina 27695-7801. E-mail: mark_wilson@ncsu.edu

Average levels of interrater and intrarater reliability for job analysis data were investigated using meta-analysis. Forty-six studies and 299 estimates of reliability were cumulated. Data were categorized by specificity (generalized work activity or task data), source (incumbents, analysts, or technical experts), and descriptive scale (frequency, importance, difficulty, time-spent, and the Position Analysis Questionnaire). Task data initially produced higher estimates of interrater reliability than generalized work activity data and lower estimates of intrarater reliability. When estimates were corrected for scale length and number of raters by using the Spearman-Brown formula, task data had higher interrater and intrarater reliabilities. Incumbents displayed the lowest reliabilities. Scales of frequency and importance were the most reliable. Implications of these reliability levels for job analysis practice are discussed.

Since mandating the legal requirements for the use of job analyses (Uniform Guidelines for Employee Selection Procedures, 1978), the importance of obtaining job analysis data and assessing the reliability of such data has become a salient issue to both practitioners and researchers. It has been estimated that large organizations spend between $150,000 and $4,000,000 annually on job analyses (Levine, Sistrunk, McNutt, & Gael, 1988). Furthermore, it appears probable that job analysis data will continue to undergo increasing legal scrutiny regarding issues of quality, similar to the job-relatedness of performance appraisal data, which have already seen a barrage of court decisions during the past decade (Gutman, 1993; Werner & Bolino, 1997).

Considering the widespread utility implications, legal issues, and organizational costs associated with conducting a job analysis, it would seem safe to assume that the determination of the general expected level of job analysis data reliability should be of primary importance to any user of this type of work information. Prior literature has lamented the paucity of systematic research investigating reliability issues in job analysis (Harvey, 1991; Harvey & Wilson, 2000; Morgeson & Campion, 1997; Ployhart, Schmitt, & Rogg, 2000). Most research delving into the reliability and validity of job analysis has been in a search for moderating variables of individual characteristics, such as demographic variables like sex, race, or tenure (e.g., Borman, Dorsey, & Ackerman, 1992; Landy & Vasey, 1991; Richmann & Quinones, 1996) or other variables like performance and cognitive ability (e.g., Aamodt, Kimbrough, Keller, & Crawford, 1982; Harvey, Friedman, Hakel, & Cornelius, 1988; Henry & Morris, 2000). The overall conclusions of these research veins have been mixed, with some showing significant evidence of moderation, and others displaying none.

It is interesting to note that only recently have the definitions of reliable and valid job information received directed attention and discourse (Harvey & Wilson, 2000; Morgeson & Campion, 2000; Sanchez & Levine, 2000). More recent research has tended to frame the quality of job analysis data through views ranging from various validity issues (Pine, 1995; Sanchez & Levine, 1994), to potential social and cognitive sources of inaccuracy (Henry & Morris, 2000; Morgeson & Campion, 1997), to the merits of job analysis and consequential validity (Sanchez & Levine, 2000), and to an integrative approach emphasizing both reliability and validity examinations (Harvey & Wilson, 2000; Wilson, 1997). As an important component of data quality, we sought to specifically examine the role of reliability in relation to job analysis data quality.

Purpose

The principal purpose of this study was to provide insight into the average levels of reliability that one could expect of job analysis data. Coinciding with this purpose were more specific examinations of the reliability expectations given different data specificity, various sources of data, variety of descriptive scales, and techniques of reliability estimation. The hope embedded in estimating average levels of reliability was that these data may in turn inspire greater attention to the reliability of job analysis data, as well as be used as reference points when examining the reliability of such data. We feel that not enough empirical attention has been paid to this issue, and that the availability of such reliability reference points could be of particular importance to practitioners conducting job analyses. To date, no such estimates have been available, and practitioners have had no means of comparison with which to associate the reliability levels they may have obtained. Moreover, elucidation of the levels of reliability across varying data specificity, data sources, and descriptive scales would provide useful information regarding decisions surrounding the method, sample, format, and overall design of a job analysis project.

Scope and Classifications

Work information may range from attributes of the work performed to the required attributes of the workers themselves. Unfortunately, this common collective conception of work information (job-oriented vs. worker-oriented) can confound two distinctive realms of data. Historically, Dunnette (1976) described these realms as “two worlds of human behavioral taxonomies” (p. 477). Dunnette’s two worlds referred to the activities required by the job and the characteristics of the worker deemed necessary for successful performance of the job. More recently, Harvey and Wilson (2000) contrasted “job analysis” versus “job specification,” with the former collecting data about work activities and the latter collecting data describing worker attributes presumably required for job performance. The present study focused only on reliability evidence obtained through data that described the activities performed within a given work role (i.e., job analysis). This parameter allowed the study’s investigations to examine the reliability of data that carry the feasibility of verification through observation, as opposed to latent worker attributes typically described by job specification data.

The primary classification employed by the present study delineated job analysis data by two categories of specificity: task and general work activity (GWA). These classifications were not meant to be all-inclusive but rather were meant to capture the majority of job analysis data. Task-level data were defined as information that targets the more microdata specificity (e.g., “cleans teeth using a water-pick” or “recommends medication treatment schedule to patient”). In contrast, GWA-level data were defined similarly to the description offered by Cunningham, Drewes, and Powell (1995), portraying GWAs as “generic descriptors,” including general activity statements applicable across a range of jobs and occupations (e.g., “estimating quantity” or “supervising the work of others”). An important caveat to data inclusion was that only GWAs relating to the work performed within a job were used, thus excluding what have been referred to as “generalized worker requirements” such as knowledge, skills, and abilities (KSAs; McCormick, Jeanneret, & Mecham, 1972). By separately coding tasks and GWAs, the present study allowed an investigation of job-analysis reliability relative to the specificity domain of the collected data. Prior literature has suggested that job-analysis data specificity may affect the reliability of such data (Harvey & Wilson, 2000; K. F. Murphy & Wilson, 1997), with more specific data showing higher reliability levels. Moreover, with the increasing prevalence of “competency” modeling, which in part incorporates more general levels of behavioral information (Schippmann, 1999), as well as the recent push to more generic activities for purposes of job and occupation analysis (Cunningham, 1996), the separate examination of GWAs allowed for interpretative comparisons with increasingly prevalent and contemporary job and occupation analysis approaches.

In addition to data specificity, the present study incorporated a classification for the source from which the data were generated. Sources of job-analysis information were classified into three groupings: (1) incumbents, (2) analysts, and (3) technical experts. Incumbent sources referred to job information derived from jobholders. These data were usually collected through self-report surveys and inventories. Analyst-derived job information was from nonjobholder professional job analysts. These data were generally gathered through methods such as observation and interviewing and were then used to complete a formal job-analysis instrument (i.e., Position Analysis Questionnaire [PAQ]). The third source group, technical experts, captured data obtained through individuals defined specifically as training specialists, supervisors, or higher level managers (Landy & Vasey, 1991). Because many technical experts can also be considered job incumbents, this designation was reserved only for data that were explicitly described as being collected from technical experts, supervisors, or some other “senior level” source. By source-coding reliability evidence, analyses could reveal any changes in the magnitude of reliability estimates in relation to these common sources of job-analysis data. Prior empirical investigation has suggested the possibility of differential levels of reliability across various classifications of respondents (Green & Stutzman, 1986; Henry & Morris, 2000; Landy & Vasey, 1991), such as performance level of the incumbent and various demographic characteristics of subject matter experts. The present research sought to compare the reliability levels across sources rather than only within a given source as in previous research.

A third classification was used to categorize the type of descriptive scale upon which a job was analyzed. Some common examples of descriptive scales are time spent on task, task importance, and task difficulty (Gael, 1983). Past research has suggested that the variety of scales used in job analysis yield different average reliability coefficients (Birt, 1968). For instance, scales of frequency of task performance and task duration have displayed reliabilities ranging from the .50s to the .70s (McCormick & Ammerman, 1960; Morsh, 1964). Difficulty scales have generally been found to have lower reliabilities than other descriptive scales, with estimates ranging from the .30s to the .50s (McCormick & Ammerman, 1960; McCormick & Tombrink, 1960; Wilson, Harvey, & Macy, 1990). Thus, data were coded for the commonly used descriptive scales of frequency, importance, difficulty, and time spent. GWA data derived from the PAQ (McCormick, Jeanneret, & Mecham, 1972), which is arguably the most widely used and researched generic job analysis instrument, were additionally coded.

To allow for a comparative analysis of reliability across the three aforementioned classifications, it was necessary to group coefficients into appropriate estimation categories. Therefore, reliability estimates were delineated by their computational approach. Two approaches commonly used in job analyses to estimate reliability were chosen as the categories employed by this study. Both types of reliability estimation are discussed in the ensuing section.

Types of Reliability Estimates Used in Job Analysis

The two most commonly used forms of reliability estimation are interrater and intrarater reliability (Viswesvaran, Ones, & Schmidt, 1996). In the context of job analysis practice, interrater reliability seems to be the more prevalent of the two techniques. Interrater reliability identifies the degree to which different raters (i.e., incumbents) agree on the components of a target work role or job. Interrater reliability estimations are essentially indices of rater covariation. This type of estimate can portray the overall level of consistency among the sample raters involved in the job analysis effort. Typically, interrater reliability is assessed using either Pearson correlations or intraclass correlations (ICC; see Shrout & Fleiss, 1979, for a detailed discussion).

Most previous empirical literature has focused on the intrarater reliability of job analysis data. Two forms of intrarater reliability commonly employed within job analysis are repeated item and rate–rerate of the same job at different points in time. Both of these estimates may be viewed as coefficients of stability (Viswesvaran et al., 1996). The repeated items approach can display the consistency of a rater across a particular job analysis instrument (i.e., task inventory), whereas the rate–rerate technique assesses the extent to which there is consistency across two administrations. Intrarater reliability is typically assessed using Pearson correlations.

Research Questions

We examined reliability from previously conducted job analyses by using the four aforementioned classifications. To explore this purpose, we used meta-analytic procedures. The purpose of these meta-analyses was to estimate the average reliability that one could expect when gathering work information through a job analysis at different data specificities from different sources and when using various descriptive scales. In short, we sought to investigate the following questions: What are the mean estimates of reliability for job analysis information, and how do these estimates differ in magnitude across data specificity, data source, and descriptive scale? Are the levels of interrater reliability higher or lower than levels of intrarater reliability? Finally, does the source of the job analysis information or the choice of descriptive scale affect the magnitudes of reliability estimates?

Method

Database

We conducted a literature search using standard and supplementary techniques in an attempt to lessen the effect of the “file drawer” problem, the increased probability of positive findings in published literature (Rosenthal, 1979). In the case of job analysis research, this could result in unrealistically high estimations of reliability. In addition, many empirical studies about or using job analysis data only report reliability estimations as side bars to the main topic, thus making it more difficult to locate these sources of reliability data. Using the standard technique, we used the Internet and other computer-based resources. Some examples of these sources were PsycINFO, PsychLit, job analysis–related Web sites and listserves, the National Technical Information Services database, as well as other online and offline library databases. Within these sources, we used keyword searches with terms such as “job analysis, job analysis accuracy, job analysis reliability, work analysis, and job information accuracy.” The majority of reliability data that we used in this study were gathered with this method. The supplementary technique, meant to expand the breadth of the literature search, used both ancestry and descendency approaches (Cooper, 1984), as well as correspondence with researchers in the field of job analysis. The supplementary approach produced a substantial amount of reliability data in the form of technical reports and unpublished job analyses. Table 1 displays descriptive statistics of the included studies.

[Table 1. Descriptive Summary of Collected Data. The table reports the number of studies and the number of reliability estimates in each data category: interrater reliability by specificity (task, GWA), source (incumbent, analyst, technical expert), and scale (frequency, importance, difficulty, time spent, PAQ); intrarater reliability by specificity, source, and scale (frequency, importance, difficulty, time spent); and publication type (journal, technical report, book, dissertation). Note. GWA = generalized work activity; PAQ = Position Analysis Questionnaire.]

Analyses

To be included in the meta-analyses, studies were first required to describe the approach used to assess reliability of the job data. Those that did not assess reliability according to the aforementioned estimation types were excluded. Second, the sample size used in the reliability estimation was required. Third, studies were required to assess the requirements of the job itself, not merely attributes of the workers. Once the pool of studies was assembled, we coded the data for the purposes of a comparative analysis. Coding allowed for us to conduct separate meta-analyses within each of the study’s classifications, hence making the average correlation generated within each grouping more empirically justified. Two raters independently coded the gathered studies according to the four aforementioned classifications. Interrater agreement of study coding was 98%. Disagreements were resolved through discussion, and no additional exclusions were necessary.

We conducted a meta-analysis correcting only for sampling error for each of the distributions gleaned from the study’s classifications. When cumulating reliability across several past empirical studies, it may be necessary to determine whether a need to adjust results from various studies to a common length of items or number of raters (interrater reliability) or to a common time interval (intrarater reliability) is required. Two available options were to use the Spearman-Brown formula to bring all estimates to a common length or to use previous research investigating the functional relationship between time intervals and job analysis reliability. The present study conducted meta-analyses both with and without the Spearman-Brown corrections of individual reliability estimates. Without evidence of the functional relationship affecting intrarater reliability of job analysis data, the only statement able to be proffered is that as the time interval increases, reliability generally decreases (Viswesvaran et al., 1996). Thus, no meta-analytic corrections were made to bring estimates of intrarater reliability to a common time interval. However, intrarater reliability in job analysis can be derived from either a rate–rerate or a repeated item approach. Therefore, to display the potential effects of time and allow comparison between these two common forms of intrarater reliability, separate meta-analyses for repeated item and rate–rerate reliabilities were conducted. The mean time interval for rate–rerate data from the gathered studies was 6.5 weeks and had a range of 1–16 weeks. Rate–rerate data comprised 84% of the collected intrarater reliability data and repeated item data made up the remaining 16%. As for a common length of items or number of raters, the body of literature on job analysis procedures does not concede a particular recommendation. Suggestions for item length and number of raters varies depending on the organization, project purposes, and the practical limitations of the project (Levine & Cunningham, 1999). Therefore, to portray ...
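The excerpt above describes bringing reliability estimates to a common scale length or number of raters with the Spearman-Brown formula. As a minimal illustration of that correction (the input values below are invented, not taken from the article), the prophecy formula can be applied as follows:

```python
def spearman_brown(r: float, k: float) -> float:
    """Project a reliability estimate r to k times the original scale length
    (or number of raters): r_k = k*r / (1 + (k - 1)*r)."""
    return (k * r) / (1 + (k - 1) * r)

# Example: an interrater reliability of .45 obtained from 3 raters, projected
# to 6 raters (k = 6/3 = 2). The .45 figure is illustrative only.
observed_r = 0.45
k = 6 / 3
print(round(spearman_brown(observed_r, k), 3))  # 0.621
```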

Tutor Answer

DoctorDickens
School: Carnegie Mellon University

Attached.


Job analysis
Name of the student
Institution
Name of the professor
Date of submission


TASK
Specific Tasks

1. Manage the production of the required project deliverables.
2. Manage project risks, including the development of alternative action plans.
3. Planning, control and management of the project.
4. Design and apply the suitable project management standards for unification in the NI Gateway Review Process.
5. Prepare and maintain project, stage and exception plans as required.
6. Identify and obtain sustenance and advice needed for the project.
7. Liaise with the appointed project assurance repr...

Review

Anonymous
Tutor went the extra mile to help me with this essay. Citations were a bit shaky, but I appreciated how well he handled APA style and how willing he was to change it even though I didn't specify. Got a B+, which is believable and acceptable.
