Exercise Questions

User Generated

Ohgxhf65

Business Finance

Description

A minimum of 250 words is required for each question. Please make sure you use proper academic sources and follow APA standards in each answer. A minimum of two outside sources is required for each question. (5 questions)

I have attached the chapter reading, but any outside sources may be used.

1. Describe some of the most common pitfalls in using metrics for evaluating the effectiveness of a selection system.

2. What are the four main stages in developing effective policies and procedures? Why is each stage important?

3. Review the major negative consequences of employee turnover, and also consider the potential positive consequences of turnover.

4. What is the "turnover train"? What are the factors that lead people to start the process of turnover at each stage, and what can organizations do to encourage people to get off the train at each stage?

5. What are the most effective initiatives for reducing turnover?

Attachment Preview: Chapter 11, Decision Making (Part Five: Staffing Activities: Employment)

The Staffing Organizations Model

[Figure: the staffing organizations model. The organization's mission, goals, and objectives drive organization strategy, which drives HR and staffing strategy. Staffing policies and programs comprise support activities (legal compliance; planning; job analysis and rewards) and core staffing activities (recruitment: external, internal; selection: measurement, external, internal; employment: decision making, final match), all feeding staffing system and retention management.]

Part Five: Staffing Activities: Employment
Chapter Eleven: Decision Making
Chapter Twelve: Final Match

Chapter Eleven: Decision Making

Chapter Outline
Learning Objectives and Introduction
Choice of Assessment Method: Validity Coefficient; Face Validity; Correlation With Other Predictors; Adverse Impact; Utility
Determining Assessment Scores: Single Predictor; Multiple Predictors
Hiring Standards and Cut Scores: Description of the Process; Consequences of Cut Scores; Methods to Determine Cut Scores; Professional Guidelines
Methods of Final Choice: Random Selection; Ranking; Grouping; Ongoing Hiring
Decision Makers: Human Resource Professionals; Managers; Employees
Legal Issues: Uniform Guidelines on Employee Selection Procedures; Diversity and Hiring Decisions
Summary; Discussion Questions; Ethical Issues; Applications; Tanglewood Stores Case

Learning Objectives

• Be able to interpret validity coefficients
• Estimate adverse impact and utility of selection systems
• Learn about methods for combining multiple predictors
• Establish hiring standards and cut scores
• Evaluate various methods of making a final selection choice
• Understand the roles of various decision makers in the staffing process
• Recognize the importance of diversity concerns in the staffing process

Introduction

The preceding chapters described a variety of techniques that organizations can use to assess candidates. However, collecting data on applicants does not, by itself, lead to a straightforward conclusion about who should be selected. Should interviews take precedence over standardized ability tests?
Should job experience be the primary focus of selection decisions, or will organizations make better choices if experience ratings are supplemented with data on personality? What role should experience and education have in selection? In this chapter, we discuss how this information can be used to make decisions about who will ultimately be hired. As we will see, subjective factors often enter into the decision process. Having methods in advance to resolve any disputes that arise while evaluating candidates can greatly facilitate efficient decision making and reduce conflict among members of the hiring committee.

When it comes to making final decisions about candidates, it is necessary to understand the nature of the organization and the jobs being staffed. Organizations that have strong cultures and heavy needs for customer service might put a stronger emphasis on candidate personality and values. For jobs with a stronger technical emphasis, it makes more sense to evaluate candidates on the basis of demonstrated knowledge and skills. Throughout this chapter, consider how your own organization's strategic goals factor into staffing decision making.

The process of translating predictor scores into assessment scores is broken down into a series of subtopics. First, techniques for using single predictors and multiple predictors are discussed. The process used to determine minimum standards (a.k.a. "cut scores") is then described, along with the consequences of cut scores and methods to determine them. Methods of final choice must be considered to determine who from among the finalists will receive a job offer. For all of these decisions, consideration must be given to who should be involved in the decision process. Finally, legal issues should also guide decision making. Particular consideration will be given to the Uniform Guidelines on Employee Selection Procedures (UGESP) and to the role of diversity considerations in hiring decisions.

Choice of Assessment Method

In our discussions of external and internal selection methods, we listed multiple criteria to consider when deciding which method(s) to use (e.g., validity, utility). Some of these criteria require more amplification, specifically validity, correlation with other predictors (newly discussed here), adverse impact, and utility.

Validity Coefficient

Validity refers to the relationship between predictor and criterion scores. Often this relationship is assessed using a correlation (see Chapter 7). The correlation between predictor and criterion scores is known as a validity coefficient. The usefulness of a predictor is determined on the basis of the practical significance and statistical significance of its validity coefficient. As was noted in Chapter 7, reliability is a necessary condition for validity: selection measures with questionable reliability will have questionable validity.

Practical Significance

Practical significance refers to the extent to which the predictor adds value to the prediction of job success. It is assessed by examining the sign and the magnitude of the validity coefficient.

Sign. The sign of the validity coefficient refers to the direction of the relationship between the predictor and the criterion.
A useful predictor is one where the sign of the relationship is positive or negative and is consistent with the logic or theory behind the predictor.

Magnitude. The magnitude of the validity coefficient refers to its size. It can range from 0 to 1.00, where a coefficient of 0 is least desirable and a coefficient of 1.00 is most desirable. The closer the validity coefficient is to 1.00, the more useful the predictor. Predictors with validity coefficients of 1.00 are not to be expected, given the inherent difficulties in predicting human behavior. Instead, as shown in Chapters 8 and 9, validity coefficients for current assessment methods range from 0 to about .60. Any validity coefficient above 0 is better than random selection and may be somewhat useful. Validities above .15 are moderately useful, and validities above .30 are highly useful.

Statistical Significance

Statistical significance, as assessed by probability or p values (see Chapter 7), is another factor that should be used to interpret the validity coefficient. If a validity coefficient has a reasonable p value, chances are good that it would yield a similar validity coefficient if the same predictor were used with different sets of job applicants. That is, a reasonable p value indicates that the method of prediction, rather than chance, produced the observed validity coefficient. Convention has it that a reasonable level of significance is p < .05, meaning there are fewer than 5 chances in 100 of concluding there is a relationship in the population of job applicants when, in fact, there is not.

Caution must be exercised in using statistical significance as a way to gauge the usefulness of a predictor. Research has clearly shown that nonsignificant validity coefficients may simply be due to the small samples of employees used to calculate the validity coefficient. Rejecting a predictor solely on the basis of a small sample may mean rejecting a predictor that would have been quite acceptable had a larger sample of employees been used to test for validity.1 These concerns over significance testing have led some researchers to recommend the use of "confidence intervals," for example, showing that one can be 90% confident that the true validity is no less than .30 and no greater than .40.2

Face Validity

Face validity concerns whether the selection measure appears valid to the applicant. Face validity is potentially important to selection decision making in general, and to the choice of selection methods in particular, if it affects applicant behavior (willingness to continue in the selection process, performance, and turnover once hired). Judgments of face validity are closely associated with applicant reactions.3

Correlation With Other Predictors

If a predictor is to be considered useful, it must add value to the prediction of job success. To add value, it must add to the prediction of success above and beyond the forecasting powers of current predictors. In general, a predictor is more useful if it has a smaller correlation with other predictors and a higher correlation with the criterion.

To assess whether a predictor adds anything new to forecasting, a matrix showing all the correlations between the predictors and the criteria should always be generated.
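As a concrete illustration of this step, the short sketch below builds such a correlation matrix from hypothetical applicant data; the column names, sample values, and use of pandas are assumptions for illustration, not part of the chapter.

```python
import pandas as pd

# Hypothetical validation-study data: two predictors and a job performance criterion.
applicants = pd.DataFrame({
    "interview":       [3, 4, 5, 2, 4, 5, 3, 2, 4, 5],
    "ability_test":    [62, 75, 88, 55, 70, 90, 60, 58, 72, 85],
    "job_performance": [2.9, 3.4, 4.6, 2.5, 3.6, 4.8, 3.0, 2.7, 3.5, 4.4],
})

# Correlation matrix among predictors and the criterion.
# Predictor-criterion correlations are the validity coefficients;
# the predictor-predictor correlation shows how redundant the two measures are.
corr = applicants.corr().round(2)
print(corr)
```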
If the correlations between the new predictor and the existing predictors are higher than the correlations between the new predictor and the criterion, the new predictor is not adding much that is new. There are also relatively straightforward techniques, such as multiple regression, that take the correlation among predictors into account.4

Predictors are likely to be highly correlated with one another when their domain of content is similar. For example, both biodata and application blanks may focus on previous training received. Thus, using both biodata and application blanks as predictors may be redundant, and neither one may augment the other much in predicting job success.

Adverse Impact

A predictor discriminates between people in terms of the likelihood of their success on the job. A predictor may also discriminate by screening out a disproportionate number of minorities and women. To the extent that this happens, the predictor has adverse impact, and it may result in legal problems. As a result, when the validity of alternative predictors is the same and one predictor has less adverse impact than the other, the one with less adverse impact should be used.

A very difficult judgment call arises when one predictor has high validity and high adverse impact while another predictor has low validity and low adverse impact. From the perspective of accurately predicting job performance, the former predictor should be used. From an equal employment opportunity and affirmative action (EEO/AA) standpoint, the latter predictor is preferable. Balancing the trade-offs is difficult and requires use of the organization's staffing philosophy regarding EEO/AA. Later in this chapter we consider some possible solutions to this important problem.
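One common way to quantify adverse impact is to compare selection rates across groups, as in the minimal sketch below. The applicant counts are invented for illustration, and the 0.80 threshold is the "four-fifths" rule of thumb associated with the UGESP rather than a figure from this chapter.

```python
# Minimal sketch: compare selection rates across two applicant groups (hypothetical counts).
def selection_rate(hired: int, applicants: int) -> float:
    return hired / applicants

rate_group_a = selection_rate(hired=30, applicants=60)  # 0.50
rate_group_b = selection_rate(hired=12, applicants=40)  # 0.30

# Impact ratio: the lower group's selection rate divided by the higher group's rate.
impact_ratio = min(rate_group_a, rate_group_b) / max(rate_group_a, rate_group_b)

# The UGESP "four-fifths" guideline flags ratios below 0.80 as potential adverse impact.
print(f"Impact ratio: {impact_ratio:.2f}  (flagged: {impact_ratio < 0.80})")
```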
Utility

Utility refers to the expected gains to be derived from using a predictor. Expected gains are of two types: hiring success and economic.

Hiring Success Gain

Hiring success refers to the proportion of new hires who turn out to be successful on the job. Hiring success gain refers to the increase in the proportion of successful new hires that is expected to occur as a result of adding a new predictor to the selection system. If the current staffing system yields a success rate of 75% for new hires, how much of a gain in this success rate will occur by adding a new predictor to the system? The greater the expected gain, the greater the utility of the new predictor. This gain is influenced not only by the validity of the new predictor (as already discussed) but also by the selection ratio and the base rate.

Selection Ratio. The selection ratio is simply the number of people hired divided by the number of applicants (sr = number hired / number of applicants). The lower the selection ratio, the more useful the predictor, because the organization is more likely to be selecting successful employees. If the selection ratio is low, then the denominator is large or the numerator is small, and both conditions are desirable. A large denominator means that the organization is reviewing a large number of applicants for the job; the chances of identifying a successful candidate are much better in this situation than when an organization hires the first available person or reviews only a few applicants. A small numerator indicates that the organization is being very stringent with its hiring standards: it is hiring people likely to be successful rather than anyone who meets the most basic requirements, and it is using high standards to ensure that the very best people are selected.

Base Rate. The base rate is defined as the proportion of current employees who are successful on some criterion or human resource (HR) outcome (br = number of successful employees / number of employees). A high base rate is desired for obvious reasons. A high base rate may come about from the organization's staffing system alone or in combination with other HR programs, such as training and compensation.

When considering possible use of a new predictor, one issue is whether the proportion of successful employees (i.e., the base rate) will increase as a result of using the new predictor in the staffing system. This is the matter of hiring success gain. Dealing with it requires simultaneous consideration of the organization's current base rate and selection ratio, as well as the validity of the new predictor. The Taylor-Russell tables help address this issue. An excerpt is shown in Exhibit 11.1.

Exhibit 11.1  Excerpt From the Taylor-Russell Tables

A. Base rate = .30              B. Base rate = .80
              Selection ratio                 Selection ratio
Validity      .10      .70      Validity      .10      .70
  .20         43%      33%        .20         89%      83%
  .60         77%      40%        .60         99%      90%

Source: H. C. Taylor and J. T. Russell, "The Relationship of Validity Coefficients to the Practical Effectiveness of Tests in Selection," Journal of Applied Psychology, 1939, 23, pp. 565-578.

The cells in the tables show the percentage of new hires who will turn out to be successful. This is determined by a combination of the validity coefficient for the new predictor, the selection ratio, and the base rate. The top matrix (A) shows the percentage of successful new hires when the base rate is low (.30), the validity coefficient is low (.20) or high (.60), and the selection ratio is low (.10) or high (.70). The bottom matrix (B) shows the percentage of successful new hires when the base rate is high (.80) and the validity coefficient and selection ratio take the same low and high values. Two illustrations show how these tables may be used.

The first illustration has to do with the decision whether to use a new test to select computer programmers. Assume that the current test has a validity coefficient of .20, and that a consulting firm has approached the organization with a new test that has a validity coefficient of .60. Should the organization purchase and use the new test? At first blush, the answer might seem to be yes, because the new test has a substantially higher level of validity. This initial reaction, however, must be gauged in the context of the selection ratio and the current base rate. If the current base rate is .80 and the current selection ratio is .70, then, as can be seen in matrix B of Exhibit 11.1, the new selection procedure will result in a hiring success gain of only 83% to 90%. The organization may already have a very high base rate because it does other facets of HR management quite well (e.g., training, rewards); hence, even though its current predictor has a validity of only .20, its base rate is already .80. On the other hand, if the existing base rate is .30 and the existing selection ratio is .10, the organization should strongly consider the new test: as shown in matrix A of Exhibit 11.1, the hiring success gain will go from 43% to 77% with the addition of the new test.
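The sketch below encodes just the eight cells excerpted in Exhibit 11.1 and looks up the expected percentage of successful hires; it illustrates how the excerpt is read rather than implementing the full Taylor-Russell tables, and the function name is my own.

```python
# Excerpted Taylor-Russell values: percent of new hires expected to be successful,
# keyed by (base rate, validity, selection ratio). Values come from Exhibit 11.1 only.
TAYLOR_RUSSELL_EXCERPT = {
    (0.30, 0.20, 0.10): 43, (0.30, 0.20, 0.70): 33,
    (0.30, 0.60, 0.10): 77, (0.30, 0.60, 0.70): 40,
    (0.80, 0.20, 0.10): 89, (0.80, 0.20, 0.70): 83,
    (0.80, 0.60, 0.10): 99, (0.80, 0.60, 0.70): 90,
}

def hiring_success(base_rate: float, validity: float, selection_ratio: float) -> int:
    return TAYLOR_RUSSELL_EXCERPT[(base_rate, validity, selection_ratio)]

# First illustration: low base rate and low selection ratio,
# current test (r = .20) versus proposed test (r = .60).
old = hiring_success(0.30, 0.20, 0.10)
new = hiring_success(0.30, 0.60, 0.10)
print(f"Hiring success gain: {old}% -> {new}%")  # 43% -> 77%
```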
A second illustration using the Taylor-Russell tables has to do with recruitment in conjunction with selection. Assume that the validity of the organization's current predictor, a cognitive ability test, is .60. Also assume that a new college recruitment program has been very aggressive. As a result, there is a large swell in the number of applicants, and the selection ratio has decreased from .70 to .10. The organization must decide whether to continue the college recruitment program. An initial reaction may be that the program should be continued because of the large increase in applicants generated. As shown in matrix A of Exhibit 11.1, this answer would be correct if the current base rate is .30: by decreasing the selection ratio from .70 to .10, the hiring success gain increases from 40% to 77%. On the other hand, if the current base rate is .80, the organization may decide not to continue the program; hiring success increases only from 90% to 99%, which may not justify the very large expense associated with aggressive college recruitment campaigns.

The point of these illustrations is that the decision whether to use a new predictor depends on the validity coefficient, the base rate, and the selection ratio, and these should not be considered independently of one another. HR professionals should carefully record and monitor base rates and selection ratios. Then, when management asks whether a new predictor should be used, they can respond appropriately.

The Taylor-Russell tables may be used for any combination of validity coefficient, base rate, and selection ratio values. The values shown in Exhibit 11.1 are excerpts for illustration only; when other values need to be considered, the original tables should be consulted.

Economic Gain

Economic gain refers to the bottom-line, or monetary, impact of a predictor on the organization. The greater the economic gain the predictor produces, the more useful the predictor. Considerable work has been done over the years on assessing the economic gain associated with predictors. The basic utility formula used to estimate economic gain is shown in Exhibit 11.2.

At a general level, this formula works as follows. Economic gains derived from using a valid predictor versus random selection (the left-hand side of the equation) depend on two factors (the right-hand side of the equation). The first factor (the entry before the subtraction sign) is the revenue generated by hiring productive employees using the new predictor. The second factor (the entry after the subtraction sign) is the cost associated with using the new predictor. Positive economic gains are achieved when revenues are maximized and costs are minimized. Revenues are maximized by using the most valid selection procedures.
Costs are minimized by using the predictors with the least costs.

Exhibit 11.2  Economic Gain Formula

ΔU = (T × Nn × rxy × SDy × Zs) − (Na × Cy)

Where:
ΔU = expected dollar value increase to the organization from using the predictor versus random selection
T = average tenure of employees in the position
Nn = number of people hired
rxy = correlation between the predictor and job performance
SDy = dollar value of job performance
Zs = average standard predictor score of the selected group
Na = number of applicants
Cy = cost per applicant

Source: Adapted from C. Handler and S. Hunt, "Estimating the Financial Value of Staffing-Assessment Tools," Workforce Management, Mar. 2003 (www.workforce.com).

To estimate actual economic gain, values are entered into the equation for each of the variables shown. Values are usually generated by experts in HR research relying on the judgments of experienced line managers. Several variations on this formula have been developed; for the most part, these variations require consideration of additional factors, such as assumptions about tax rates and applicant flows. In all these models, the most difficult factor to estimate is the dollar value of job performance (SDy), which represents the difference between productive and nonproductive employees in dollar value terms. A variety of methods have been proposed, ranging from manager estimates of employee value to percentages of compensation (usually 40% of base pay).5 Despite this difficulty, economic gain formulas represent a significant way of estimating the economic gains that may be anticipated with the use of a new (and valid) predictor.
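A minimal sketch of this calculation appears below. Every input value is a made-up illustration rather than a figure from the chapter, and SDy is set crudely at 40% of base pay, one of the approximation methods mentioned above.

```python
def economic_gain(tenure_yrs, n_hired, validity, sd_y,
                  avg_z_selected, n_applicants, cost_per_applicant):
    """Expected dollar gain (delta U) from using the predictor versus random selection."""
    revenue_term = tenure_yrs * n_hired * validity * sd_y * avg_z_selected
    cost_term = n_applicants * cost_per_applicant
    return revenue_term - cost_term

# Hypothetical inputs: 20 hires expected to stay 3 years, predictor validity .40,
# SDy approximated as 40% of a $50,000 base salary, selected group averaging
# one standard deviation above the applicant mean, 200 applicants at $75 each.
delta_u = economic_gain(
    tenure_yrs=3, n_hired=20, validity=0.40,
    sd_y=0.40 * 50_000, avg_z_selected=1.0,
    n_applicants=200, cost_per_applicant=75,
)
print(f"Estimated economic gain: ${delta_u:,.0f}")  # $465,000
```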
Limitations of Utility Analysis

Although utility analysis can be a powerful method to communicate the bottom-line implications of using valid selection measures, it is not without limitations. Perhaps the most fundamental concern among researchers and practitioners is that utility estimates lack realism because of the following:

1. Virtually every organization uses multiple selection measures, yet existing utility models assume that the decision is whether to use a single selection measure rather than selecting applicants by chance alone.6
2. Many important variables are missing from the model, such as EEO/AA concerns and applicant reactions.7
3. The utility formula is based on many assumptions that are probably overly simplistic, including that validity does not vary over time;8 that nonperformance criteria such as attendance, trainability, applicant reactions, and fit are irrelevant;9 and that applicants are selected in a top-down manner and all job offers are accepted.10

Perhaps as a result of these limitations, several factors indicate that utility analysis may have a limited effect on managers' decisions about selection measures. For example, a survey of managers who stopped using utility analysis found that 40% did so because they felt it was too complicated, whereas 32% discontinued use because they believed the results were not credible.11 Other studies have found that managers' acceptance of utility analysis is low; one study found that reporting simple validity coefficients was more likely to persuade HR decision makers to adopt a particular selection method than was reporting utility analysis results.12

These criticisms should not be taken as arguments that organizations should ignore utility analysis when evaluating selection decisions. However, decision makers are much less likely to become disillusioned with utility analysis if they are informed consumers and realize some of the limitations inherent in such analyses. Researchers have the responsibility of better embedding utility analysis in the strategic context in which staffing decisions are made, while HR decision makers have the responsibility to use the most rigorous methods possible to evaluate their decisions.13 By being realistic about what utility analysis can and cannot accomplish, the potential to fruitfully inform staffing decisions will increase.

Determining Assessment Scores

Single Predictor

Using a single predictor in selection decisions makes the process of determining scores easy: scores on the single predictor are the final assessment scores, so concerns over how to combine assessment scores are not relevant. Although using a single predictor has the advantage of simplicity, there are some obvious drawbacks. First, few employers would feel comfortable hiring applicants on the basis of a single attribute; in fact, almost all employers use multiple methods in selection decisions. A second and related reason for using multiple predictors is that utility increases as the number of valid predictors used in selection decisions increases. In most cases, using two valid selection methods will result in more effective selection decisions than using a sole predictor. For these reasons, although basing selection decisions on a single predictor is a simple way to make decisions, it is rarely the best one.

Multiple Predictors

Given the less-than-perfect validities of predictors, most organizations use multiple predictors in making selection decisions. With multiple predictors, decisions must be made about combining the resultant scores. These decisions can be addressed through consideration of compensatory, multiple hurdles, and combined approaches.

Compensatory Model

With a compensatory model, scores on one predictor are simply added to scores on another predictor to yield a total score. This means that high scores on one predictor can compensate for low scores on another. For example, if an employer is using an interview and grade point average (GPA) to select a person, an applicant with a low GPA who does well in the interview may still get the job.

The advantage of a compensatory model is that it recognizes that people have multiple talents and that many different constellations of talents may produce success on the job.
The disadvantage of a compensatory model is that, at least for some jobs, the level of proficiency for specific talents cannot be compensated for by other proficiencies. For example, a firefighter requires a certain level of strength that cannot be compensated for by intelligence.

In terms of using the compensatory model to make decisions, four procedures may be followed: clinical prediction, unit weighting, rational weighting, and multiple regression. The four methods differ in the manner in which predictor scores (raw or standardized) are weighted before being added together for a total or composite score. Exhibit 11.3 illustrates these procedures. In all four methods, raw scores are used to determine a total score. Standard scores (see Chapter 7) may need to be used rather than raw scores if each predictor variable uses a different method of measurement or is measured under different conditions. Differences in weighting methods are shown in the bottom part of Exhibit 11.3, and a selection system consisting of interviews, application blanks, and recommendations is shown in the top part. For simplicity, assume that scores on each predictor range from 1 to 5. Scores on these three predictors are shown for three applicants.

Clinical Prediction. In the clinical prediction approach in Exhibit 11.3, managers use their expert judgment to arrive at a total score for each applicant. That final score may or may not be a simple addition of the three predictor scores shown in the exhibit. Hence, applicant A may be given a higher total score than applicant B even though simple addition shows that applicant B had one more point (4 + 3 + 4 = 11) than applicant A (3 + 5 + 2 = 10).

Frequently, clinical prediction is done by initial screening interviewers or hiring managers. These decision makers may or may not have "scores" per se, but they have multiple pieces of information on each applicant, and they make a decision on the applicant by taking everything into account. In initial screening decisions, this summary decision is whether the applicant gets over the initial hurdle and passes on to the next level of assessment. For example, when making an initial screening decision, a manager at a fast-food restaurant might subjectively combine his or her impressions of various bits of information on the application form. A hiring manager for a professional position might focus on a finalist's résumé and answers to the manager's interview questions to decide whether to extend an offer.

The advantage of the clinical prediction approach is that it draws on the expertise of managers to weight and combine predictor scores. In turn, managers may be more likely to accept the selection decisions than if a mechanical scoring rule (e.g., add up the points) were used. Many managers believe that basing decisions on their experiences, rather than on mechanical scoring, makes them better at judging which applicants will be successful.14 The problem with this approach is that the reasons for the weightings are known only to the manager. Also, clinical predictions have generally been shown to be less accurate than mechanical decisions, although there are times when using intuition and a nuanced approach is necessary or the only option.
In general, one is well advised to heed former GE CEO Jack Welch's advice for making hiring decisions: "Fight like hell against . . . using your gut. Don't!"15

Exhibit 11.3  Raw Scores for Applicants on Three Predictors

Applicant    Interview    Application Blank    Recommendation
A            3            5                    2
B            4            3                    4
C            5            4                    3

Clinical Prediction
P1, P2, P3: subjective assessment of qualifications
Example: Select applicant A based on a "gut feeling" for overall qualification level.

Unit Weighting
P1 + P2 + P3 = Total score
Example: All predictor scores are added together.
Applicant A = 3 + 5 + 2 = 10
Applicant B = 4 + 3 + 4 = 11
Applicant C = 5 + 4 + 3 = 12

Rational Weighting
w1P1 + w2P2 + w3P3 = Total score
Example: Weights are set by manager judgment at w1 = .5, w2 = .3, w3 = .2.
Applicant A = (.5 × 3) + (.3 × 5) + (.2 × 2) = 3.4
Applicant B = (.5 × 4) + (.3 × 3) + (.2 × 4) = 3.7
Applicant C = (.5 × 5) + (.3 × 4) + (.2 × 3) = 4.3

Multiple Regression
a + b1P1 + b2P2 + b3P3 = Total score
Example: Weights are set by statistical procedures at a = .09, b1 = .9, b2 = .6, b3 = .2.
Applicant A = .09 + (.9 × 3) + (.6 × 5) + (.2 × 2) = 6.19
Applicant B = .09 + (.9 × 4) + (.6 × 3) + (.2 × 4) = 6.29
Applicant C = .09 + (.9 × 5) + (.6 × 4) + (.2 × 3) = 7.59

Unit Weighting. With unit weighting, each predictor is weighted the same, at a value of 1.00. As shown in Exhibit 11.3, the predictor scores are simply added together to get a total score, so the total scores for applicants A, B, and C are 10, 11, and 12, respectively. The advantage of unit weighting is that it is a simple and straightforward process and makes the importance of each predictor explicit to decision makers. The problem with this approach is that it assumes each predictor contributes equally to the prediction of job success, which often is not the case.

Rational Weighting. With rational weighting, each predictor receives a differential rather than equal weighting. Managers and other subject matter experts (SMEs) establish the weights for each predictor according to the degree to which each is believed to predict job success. These weights (w) are then multiplied by each raw score (P) to yield a total score, as shown in Exhibit 11.3. For example, the predictors are weighted .5, .3, and .2 for the interview, application blank, and recommendation, respectively, meaning that managers think interviews are the most important predictor, followed by application blanks and then recommendations. Each applicant's raw score is multiplied by the appropriate weight to yield a total score; for example, the total score for applicant A is (.5)(3) + (.3)(5) + (.2)(2) = 3.4. The advantage of this approach is that it considers the relative importance of each predictor and makes this assessment explicit. The downside is that it is an elaborate procedure that requires managers and SMEs to agree on the differential weights to be applied. The sketch below re-computes the mechanical composites from Exhibit 11.3 in code.
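This sketch simply reproduces the unit-weighted, rationally weighted, and regression-weighted composites from Exhibit 11.3; the weights and intercept are the exhibit's illustrative values, and the function names are my own.

```python
# Raw scores from Exhibit 11.3: (interview, application blank, recommendation)
applicants = {"A": (3, 5, 2), "B": (4, 3, 4), "C": (5, 4, 3)}

def unit_weighted(scores):
    return sum(scores)

def rational_weighted(scores, weights=(0.5, 0.3, 0.2)):
    return sum(w * p for w, p in zip(weights, scores))

def regression_weighted(scores, intercept=0.09, betas=(0.9, 0.6, 0.2)):
    return intercept + sum(b * p for b, p in zip(betas, scores))

for name, scores in applicants.items():
    print(name,
          unit_weighted(scores),                 # 10, 11, 12
          round(rational_weighted(scores), 2),   # 3.4, 3.7, 4.3
          round(regression_weighted(scores), 2)) # 6.19, 6.29, 7.59
```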
To make the process of rational weighting simpler, some organizations are turning to computer-aided decision tools. Williams Insurance Service uses a program called ChoiceAnalyst to make it easier for hiring managers to integrate information on a variety of candidate characteristics into a single score.16 Recruiters select the predictor constructs that will be used to judge applicants and provide decision weights for how important they think each construct should be in making a final choice. Scores on a variety of predictor measures are entered into the software, which produces a rank ordering of candidates. One advantage of these software-based solutions is that they can be explicitly developed to consider a variety of managerial preferences and to assess how differences in perceptions of predictor importance can lead to differences in final hiring decisions.

Multiple Regression. Multiple regression is similar to rational weighting in that the predictors receive different weights. With multiple regression, however, the weights are established on the basis of statistical procedures rather than on judgments by managers or other SMEs. The statistical weights are developed from (1) the correlation of each predictor with the criterion and (2) the correlations among the predictors. As a result, regression weights are optimal in the sense that they will yield the highest total validity. The calculations result in a multiple regression formula like the one shown in Exhibit 11.3. A total score for each applicant is obtained by multiplying the statistical weight (b) for each predictor by the predictor score (P) and summing these along with the intercept value (a). As an example, assume the statistical weights are .9, .6, and .2 for the interview, application blank, and recommendation, respectively, and that the intercept is .09. Using these values, the total score for applicant A is .09 + (.9)(3) + (.6)(5) + (.2)(2) = 6.19.

Multiple regression offers the possibility of a higher degree of precision in the prediction of criterion scores than the other methods of weighting. Unfortunately, this level of precision is realized only under a certain set of circumstances. In particular, for multiple regression to be more precise than unit weighting, there must be a small number of predictors, low correlations between predictor variables, and a large sample that is similar to the population the test will be used on.17 Many selection settings do not meet these criteria, so in these cases consideration should be given to unit or rational weighting, or to alternative regression-based weighting schemes that have been developed, such as general dominance weights or relative importance weights.18 In situations where these conditions are met, however, multiple regression weights can produce higher validity and utility than the other weighting schemes.

Choosing Among Weighting Schemes. Choosing among the different weighting schemes is important because how the various predictor combinations are weighted is critical in determining the usefulness of the selection process.
Despite the limitations of regression weighting schemes noted above, one analysis of actual selection measures revealed that when scores on cognitive ability and integrity tests were combined by weighting them equally, the total validity increased to .65, an increase of 27.6% over the validity of the cognitive ability test alone.19 When scores were weighted according to multiple regression, however, the increase in validity was 28.2%. While these results do not prove that multiple regression weighting is a superior method in all circumstances, they do illustrate that the choice of weighting scheme is consequential and likely depends on answers to the most important questions about clinical, unit, rational, and multiple regression schemes (in that order):

• Do selection decision makers have considerable experience and insight into selection decisions, and is managerial acceptance of the selection process important?
• Is there reason to believe that each predictor contributes relatively equally to job success?
• Are there adequate resources to use relatively involved weighting schemes such as rational weights or multiple regression?
• Are the conditions under which multiple regression is superior (relatively small number of predictors, low correlations among predictors, and a large sample) satisfied?

Answers to these questions, and the importance of the questions themselves, will go a long way toward deciding which weighting scheme to use. We should also note that while statistical weighting is more valid than clinical weighting, the combination of both methods may yield the highest validity. One study indicated that regression-weighted predictors were more valid than clinical judgments, but clinical judgments contributed uniquely to performance after controlling for regression-weighted predictors. This suggests that both statistical and clinical weighting might be used; the weighting schemes are not necessarily mutually exclusive.20

Multiple Hurdles Model

With a multiple hurdles approach, an applicant must earn a passing score on each predictor before advancing in the selection process. Such an approach is taken when each requirement measured by a predictor is critical to job success. Passing scores are set using the methods to determine cut scores (discussed in the next section). Unlike the compensatory model, the multiple hurdles model does not allow a high score on one predictor to compensate for a low score on another.

Many organizations use multiple hurdles selection systems both to reduce the cost of selecting applicants and to make the decision-making process more tractable in the final selection stage. It would be very inefficient to process all the possible information the organization might collect on a large number of candidates, so some candidates are screened out relatively early in the process. Typically, the first stage of a selection process screens the applicant pool down to those who meet some minimal educational or years-of-experience requirement. Collecting information on such requirements is fairly inexpensive for organizations and can usually be readily quantified. After this stage, the pool of remaining applicants might be further reduced by administering relatively inexpensive standardized tests to those who passed the initial screen.
This will further reduce the pool of potential candidates, allowing the organization to devote more resources to interviewing finalists and having them meet with managers at the organization's headquarters. This is the selection stage. There are many variations in how the multiple hurdles model can be implemented, and the exact nature of the "screen" versus "select" measures will vary based on the job requirements.

Combined Model

For jobs where some but not all requirements are critical to job success, a combined method may be used involving both the compensatory and the multiple hurdles models. The process starts with the multiple hurdles model and ends with the compensatory method.

An example of the combined approach for the position of recruitment manager is shown in Exhibit 11.4. The selection process starts with two hurdles that applicants must pass in succession: the application blank and the job knowledge test. Failure to clear either hurdle results in rejection. Applicants who pass receive an interview and have their references checked. Information from the interview and the references is combined in a compensatory manner. Those who pass are offered jobs, and those who do not pass are rejected. A minimal sketch of this flow appears after Exhibit 11.4.

Exhibit 11.4  Combined Model for Recruitment Manager

[Flowchart: application blank (fail: reject) -> job knowledge test (fail: reject) -> interview and references combined compensatorily (fail: reject; pass: job offer).]
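This sketch mirrors the Exhibit 11.4 flow under stated assumptions: the passing standards, the equal compensatory weights for the interview and references, and the final cut score are all invented for illustration.

```python
def combined_model_decision(application_ok: bool,
                            knowledge_score: float,
                            interview_score: float,
                            reference_score: float) -> str:
    """Hurdles first (application blank, job knowledge test), then a
    compensatory combination of interview and reference information."""
    # Hurdle 1: application blank meets minimum qualifications.
    if not application_ok:
        return "reject"
    # Hurdle 2: job knowledge test (assumed cut score of 70 on a 100-point test).
    if knowledge_score < 70:
        return "reject"
    # Compensatory stage: equally weighted composite on 1-5 scales (assumed cut score of 3.5).
    composite = 0.5 * interview_score + 0.5 * reference_score
    return "job offer" if composite >= 3.5 else "reject"

print(combined_model_decision(True, 82, interview_score=4, reference_score=3))  # job offer
print(combined_model_decision(True, 65, interview_score=5, reference_score=5))  # reject at hurdle 2
```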
Hiring Standards and Cut Scores

Hiring standards, or cut scores, address the issue of what constitutes a passing score. The score may be a single score from a single predictor or a total score from multiple predictors. To address this issue, a description of the process and the consequences of cut scores are presented first. Then, methods that may be used to establish the actual cut score are described: minimum competency, top-down hiring, and banding.

Description of the Process

Once one or more predictors have been chosen for use, a decision must be made as to who advances in the selection process. This decision requires that one or more cut scores be established. A cut score is the score that separates those who advance in the process (e.g., applicants who become candidates) from those who are rejected. For example, assume a test is used on which scores may range from 0 to 100 points. A cut score of 70 means that applicants with a 70 or higher would advance, while all others would be rejected for employment purposes.

Consequences of Cut Scores

Setting a cut score is a very important process, as it has consequences for the organization and the applicant. The consequences of cut scores are shown in Exhibit 11.5, which summarizes a scatter diagram of predictor and criterion scores. The horizontal line shows the criterion score at which the organization has determined whether an employee is successful or unsuccessful (for example, a 3 on a 5-point performance appraisal scale where 1 is low performance and 5 is high performance). The vertical line is the cut score for the predictor (for example, a 3 on a 5-point interview rating scale where 1 reveals no chance of success and 5 a high chance of success). The consequences of setting the cut score at a particular level are shown in each of the quadrants.

Exhibit 11.5  Consequences of Cut Scores

[Scatter diagram summary: the predictor cut score (no hire vs. hire) crosses the criterion success line, creating four quadrants. A = true positives (hire, successful); B = false positives (hire, unsuccessful); C = true negatives (no hire, would have been unsuccessful); D = false negatives (no hire, would have been successful).]

Quadrants A and C represent correct decisions, which have positive consequences for the organization. Quadrant A applicants are called true positives because they were assessed as having a high chance of success using the predictor and would have succeeded if hired. Quadrant C applicants are called true negatives because they were assessed as having little chance of success and, indeed, would not have been successful if hired.

Quadrants D and B represent incorrect decisions, which have negative consequences for the organization and for affected applicants. Quadrant D applicants are called false negatives because they were assessed as not being likely to succeed, but had they been hired, they would have been successful. Not only was an incorrect decision reached, but a person who would have done well was not hired. Quadrant B applicants are called false positives: they were assessed as being likely to succeed but would have ended up being unsuccessful performers. Eventually, these people would need to receive remedial training, be transferred to a new job, or even be terminated.

How high or low a cut score is set has a large impact on the consequences shown in Exhibit 11.5, and trade-offs are always involved. Compared with the moderate cut score in the exhibit, a high cut score results in fewer false positives but a larger number of false negatives. Is this a good, bad, or inconsequential set of outcomes for the organization? The answer depends on the job open for selection and the costs involved. If the job is an astronaut position for NASA, it is essential that there be no false positives; the cost of a false positive may be the loss of human life.

Now consider the consequences of a low cut score, relative to the one shown in Exhibit 11.5. There are fewer false negatives and more true positives, but more false positives are hired. In organizations that gain competitive advantage in their industry by hiring the very best, this set of consequences may be unacceptable. Alternatively, for EEO/AA purposes it may be desirable to have a low cut score so that the number of false negatives for minorities and women is minimized.

In short, when setting a cut score, attention must be given to the consequences, as they can be very serious. The sketch below tallies the four outcomes for a set of hypothetical applicants.
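The following sketch classifies hypothetical applicants into the four quadrants for a given predictor cut score and criterion standard; the scores and thresholds are invented for illustration.

```python
from collections import Counter

def classify(predictor: float, criterion: float,
             cut_score: float = 3.0, success_standard: float = 3.0) -> str:
    hired = predictor >= cut_score
    successful = criterion >= success_standard
    if hired and successful:
        return "true positive"   # quadrant A
    if hired and not successful:
        return "false positive"  # quadrant B
    if not hired and successful:
        return "false negative"  # quadrant D
    return "true negative"       # quadrant C

# (predictor rating, later criterion rating) for hypothetical applicants
applicants = [(4, 4), (4, 2), (2, 4), (2, 2), (5, 5), (3, 2)]
print(Counter(classify(p, c) for p, c in applicants))
```

Raising the assumed cut score shifts cases out of the false-positive cell and into the false-negative cell, which is the trade-off described above.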
As a result, different methods of setting cut scores have been developed to guide decision makers. These will now be reviewed.21

Methods to Determine Cut Scores

Three methods may be used to determine cut scores: minimum competency, top-down, and banding. Each is described below, along with professional guidelines for setting cut scores.

Minimum Competency

Using the minimum competency method, the cut score is set on the basis of the minimum qualifications deemed necessary to perform the job. SMEs usually establish the minimum competency score. This approach is often needed in situations where the first step in the hiring process is the demonstration of minimum skill requirements.

Exhibit 11.6 illustrates the use of cut scores in selection decisions. The scores of 25 applicants on a particular test are listed. Using the minimum competency method, the cut score is set at the level below which applicants are deemed unqualified for the job. In this case, a score of 75 was determined to be the minimum competency level necessary. Thus, all applicants who scored below 75 are deemed unqualified and rejected, and all applicants who scored 75 or above are deemed at least minimally qualified. Finalists, and ultimately offer receivers, can then be chosen from among these qualified applicants on the basis of other criteria. A short sketch applying the minimum competency and top-down approaches to these scores follows Exhibit 11.6.

Exhibit 11.6  Use of Cut Scores in Selection Decisions

Rank (ties marked T) and test scores of the 25 applicants:
1) 100, 2) 98, 3) 97, 4) 96, T5) 93, T5) 93, 7) 91, T8) 90, T8) 90, 10) 88, 11) 87, T12) 85, T12) 85, 14) 83, 15) 81, 16) 79, T17) 77, T17) 77, 19) 76, 20) 75, 21) 74, 22) 71, 23) 70, 24) 69, 25) 65

Minimum competency: with the cut score set at 75, the 20 applicants scoring 75 or above are deemed qualified; the 5 applicants scoring below 75 are deemed unqualified.
Top-down: applicants are ordered by score, from 1st choice (100) to 25th choice (65), and selected from the top until the openings are filled.
Banding: all scores within a 10-point band (e.g., 91-100, 81-90, 71-80, 61-70) are treated as equal; choices within a band (if necessary) can be made on the basis of other factors, such as EEO/AA considerations.
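As a simple illustration of the first two approaches, the sketch below applies the 75-point minimum competency standard and a top-down rule to the Exhibit 11.6 scores; the number of openings is an assumption.

```python
scores = [100, 98, 97, 96, 93, 93, 91, 90, 90, 88, 87, 85, 85,
          83, 81, 79, 77, 77, 76, 75, 74, 71, 70, 69, 65]  # Exhibit 11.6

# Minimum competency: everyone at or above the cut score is at least minimally qualified.
CUT_SCORE = 75
qualified = [s for s in scores if s >= CUT_SCORE]
print(f"{len(qualified)} of {len(scores)} applicants meet the minimum competency standard")

# Top-down: rank by score and hire from the top until the openings are filled.
openings = 8  # assumed number of vacancies
top_down_hires = sorted(scores, reverse=True)[:openings]
print("Top-down hires (scores):", top_down_hires)
```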
A variation of the minimum competency approach is hiring the first acceptable candidate. It is often used when candidates come to the attention of the hiring person sequentially, one at a time, rather than as a total pool from which finalists can be chosen. It is also used when the organization is desperate for "warm bodies" and willing to hire anyone who meets some threshold. Although at times a rush to hire is understandable, the consequences can be most unfortunate. In one case, due to the difficulty of finding telemarketers, a home mortgage call center had a policy of hiring the first acceptable candidate. The hiring manager at this call center overheard a newly hired employee tell a customer, "If I had a rate as high as yours, I'd slit my wrists, then climb to the top of the highest building and jump off."22 So, while hiring the first acceptable candidate may seem necessary, it is far from an ideal hiring strategy, and the costs may not be revealed until it is too late.

Another variant on the minimum competency approach is to impose a sort of maximum competency on overqualified applicants. The assumption here is that the job will not be sufficiently rewarding, and the overqualified employee will quickly quit. Evidence suggests that employees who perceive themselves to be overqualified for their jobs report lower levels of job satisfaction and higher intentions to leave.23 This may be because individuals who feel overqualified believe that they deserve a better job and that their present work does not sufficiently challenge them. Employers can use tactics such as increasing employee empowerment to alleviate these feelings among overqualified employees and thereby retain individuals with exceptional levels of skills.24 Managers should exercise caution before automatically rejecting individuals who appear to be overqualified. Sometimes people are interested in a job for reasons unknown to the hiring manager. There are also legal dangers, as many apparently overqualified applicants are over 40. As one manager said, "I think it's a huge mistake not to take a second look at overqualified candidates. Certainly there are valid reasons to reject some candidates, but it shouldn't be a blanket response."25

Top-Down

Another method of determining the level at which the cut score should be set is to simply examine the distribution of predictor scores for applicants and set the cut score at the level that best meets the demands of the organization. These demands might include the number of vacancies to be filled and EEO/AA requirements. This top-down method of setting cut scores is illustrated in Exhibit 11.6. As the exhibit shows, under top-down hiring, cut scores are established by the number of applicants that need to be hired. Once that number has been determined, applicants are selected from the top based on the order of their scores until the desired number is reached.

The advantage of this approach is that it is easy to administer. It also minimizes the judgment required, because the cut score is determined by the demand for labor. The big drawback is that validity is often not established prior to the use of the predictor. Also, there may be overreliance on a single predictor and cut score, while other potentially useful predictors are ignored.

A well-known example of a top-down method is the Angoff method.26 In this approach, SMEs set the minimum cut scores needed to proceed in the selection process. These experts go through the content of the predictor (e.g., test items) and determine which items the minimally qualified person should be able to pass. Usually 7-10 SMEs (e.g., job incumbents, managers) are used, and they must agree on the items to be passed. The cut score is the sum of the number of items that must be answered correctly.

There are several problems with this approach and its subsequent modifications. First, it is a time-consuming procedure. Second, the results are dependent on the SMEs: it is very difficult to get members of the organization to agree on who "the" SMEs are, and which SMEs are selected may have a bearing on the actual cut scores developed. Finally, it is unclear how much agreement there must be among SMEs when they evaluate test items, and there may be judgmental errors and biases in how cut scores are set. If the Angoff method is used, it is important that SMEs be provided with a common definition of minimally competent test takers and that they be encouraged to discuss their estimates; each of these steps has been found to increase the reliability of the SME judgments.27

Banding and Other Alternatives to Top-Down Selection

The traditional cut score method is the top-down approach. For both external hiring and internal promotions, the top-down method will yield the highest validity and utility. This method has been criticized, however, for ignoring the possibility that small differences between scores are due to measurement error. Another criticism of the top-down method is its potential for adverse impact, particularly when cognitive ability tests are used. As we noted in Chapter 9, there is perhaps no greater paradox in selection than the fact that the single most valid selection measure (cognitive ability tests) is also the measure with the most adverse impact.
The magnitude of the adverse impact is such that, on a standard cognitive ability test, if half the white applicants are hired, only 16% of the black applicants would be expected to be hired.28

One suggestion for reducing the adverse impact of top-down hiring is to use different norms for minority and majority groups, so that hiring decisions are based on normatively defined (rather than absolute) scores. For example, a black applicant who achieved a score of 75 on a test where the mean of all black applicants was 50 could be considered to have the same normative score as a white applicant who scored a 90 on a test where the mean for white applicants was 60. However, this "race norming" of test scores, once a common practice in the civil service and among some private employers, is expressly forbidden by the Civil Rights Act of 1991.

As a result, another approach, termed "banding," has been promulgated. Banding refers to the procedure whereby applicants who score within a certain score range, or band, are considered to have scored equivalently. A simple banding procedure is provided in Exhibit 11.6. On a 100-point test, all applicants who score within the same 10-point increment are considered to have scored equally. For example, all applicants who score 91 and above could be assigned a score of 9, those who score 81-90 a score of 8, and so on. (In essence, this is what is done when letter grades are assigned based on exam scores.) Hiring within bands could then be done at random or, more typically, could be based on race or sex in conjunction with other factors (e.g., seniority or experience). Banding might reduce the adverse impact of selection tests because such a procedure tends to reduce differences between higher- and lower-scoring groups (as is the case with whites and minorities on cognitive ability tests).

In practice, band widths are usually calculated on the basis of the standard error of measurement. Research suggests that banding procedures result in substantial decreases in the adverse impact of cognitive ability tests, while, under certain conditions, the losses in terms of utility are relatively small. Various methods of banding have been proposed, but the differences between these methods are relatively unimportant.29

Perhaps the major limitation of banding is that it sacrifices validity, especially when the selection measure is reliable. Because the standard error of the difference between test scores is partly a function of the reliability of the test, band widths are wider when test reliability is low than when it is high. For example, if the reliability of a test is .80, at a reasonable level of confidence, nearly half the scores on a test can be considered equivalent.30 Obviously, taking scores on a 100-point test and lumping applicants into only two groups wastes a great deal of information on applicants (it is unlikely that an applicant who scores a 51 on a valid test will perform the same on the job as an applicant who scores 99). Therefore, if the reliability of a test is even moderately high, the validity and utility decrements that result from banding become quite severe.
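The sketch below computes a band width from the standard error of the difference between two scores, one common way bands are set; the standard deviation, reliabilities, and confidence multiplier are illustrative assumptions rather than values from the chapter.

```python
import math

def band_width(sd: float, reliability: float, z: float = 1.96) -> float:
    """Band width based on the standard error of the difference (SED) between
    two test scores: score differences smaller than this are treated as
    statistically indistinguishable at the chosen confidence level."""
    sed = sd * math.sqrt(2 * (1 - reliability))
    return z * sed

# Hypothetical 100-point test with a standard deviation of 15.
for rel in (0.95, 0.80, 0.60):
    print(f"reliability = {rel:.2f}  band width = {band_width(15, rel):.1f} points")
```

Lower reliability produces wider bands, which is the source of the validity concern described above.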
There is also evidence that typical banding procedures overestimate the width of bands, which of course exacerbates the problem.31

The scientific merit of test banding is hotly debated.32 It is unlikely that we could resolve here the myriad ethical and technical issues underlying its use. Organizations considering the use of banding in personnel selection decisions must weigh the pros and cons carefully, including the legal issues (a review of lawsuits concerning banding found that it was generally upheld by the courts33). In the end, however, there may be a values choice to be made: to optimize validity (to some detriment to diversity) or to optimize diversity (with some sacrifice in validity). As one review noted, though "there is extensive evidence supporting the validity [of cognitive tests], adverse impact is unlikely to be eliminated as long as one assesses" cognitive abilities in the selection process.34 In an effort to resolve this somewhat pessimistic trade-off, some researchers have developed nonlinear models that attempt to find optimal solutions that maximize validity and diversity. One effort produced a statistical algorithm that attempts to achieve an optimal trade-off between validity and adverse impact by differentially weighting the selection measures. Although such algorithms may reduce the price to be paid in the values trade-off (between optimizing validity and diversity), the silver bullet solution remains elusive.35

Professional Guidelines

Much more research is needed on systematic procedures that are effective in setting optimal cut scores. In the meantime, a sound set of professional guidelines for setting cut scores is shown in Exhibit 11.7.

Exhibit 11.7 Professional Guidelines for Setting Cutoff Scores

1. It is unrealistic to expect that there is a single "best" method of setting cutoff scores for all situations.
2. The process of setting a cutoff score (or a critical score) should begin with a job analysis that identifies relative levels of proficiency on critical knowledge, skills, abilities, or other characteristics.
3. The validity and job relatedness of the assessment procedure are crucial considerations.
4. How a test is used (criterion-referenced or norm-referenced) affects the selection and meaning of a cutoff score.
5. When possible, data on the actual relation of test scores to outcome measures of job performance should be considered carefully.
6. Cutoff scores or critical scores should be set high enough to ensure that minimum standards of job performance are met.
7. Cutoff scores should be consistent with normal expectations of acceptable proficiency within the workforce.

Source: W. F. Cascio, R. A. Alexander, and G. V. Barrett, "Setting Cutoff Scores: Legal, Psychometric, and Professional Issues and Guidelines," Personnel Psychology, 1988, 41, pp. 21–22. Used with permission.

Methods of Final Choice

The discussion thus far has been on decision rules that can be used to narrow down the list of people to successively smaller groups that advance in the selection process from applicant to candidate to finalist. How does the organization determine which finalists will receive job offers? Discretionary assessments about the finalists must be converted into final choice decisions. The methods of final choice are the mechanisms by which discretionary assessments are translated into job offer decisions.
Methods of final choice include random selection, ranking, and grouping. Examples of each of these methods are shown in Exhibit 11.8 and are discussed here.

Exhibit 11.8 Methods of Final Choice

Random (pick one): Casey, Keisha, Buster, Lyn Aung, Meg, Luis
Ranking: 1. Keisha, 2. Meg, 3. Buster, 4. Lyn Aung, 5. Casey, 6. Luis
Grouping: Top choices: Keisha, Meg. Acceptable: Buster, Lyn Aung. Last resorts: Casey, Luis

Random Selection

With random selection, each finalist has an equal chance of being selected. The only rationale for selecting a person is the "luck of the draw." For example, the six names from Exhibit 11.8 could be put in a hat and the finalist drawn out and tendered a job offer. This approach has the advantage of being quick. Also, with random selection, one cannot be accused of favoritism, because everyone has an equal chance of being selected. The disadvantage to this approach is that discretionary assessments are simply ignored.

Ranking

With ranking, finalists are ordered from the most desirable to the least desirable based on results of discretionary assessments. As shown in Exhibit 11.8, the person ranked 1 (Keisha) is the most desirable, and the person ranked 6 (Luis) is the least desirable. It is important to note that desirability should be viewed in the context of the entire selection process. When this is done, persons with lower levels of desirability (e.g., ranks of 3, 4, and 5) should not be viewed necessarily as failures. Job offers are extended to people on the basis of their rank ordering, with the person ranked 1 receiving the first offer. Should that person turn down the job offer or suddenly withdraw from the selection process, finalist number 2 receives the offer, and so on.

The advantage of ranking is that it indicates the relative worth of each finalist for the job. It also provides a set of backups should one or more of the finalists withdraw from the process. Backup finalists may decide to withdraw from the process to take a position elsewhere. Although ranking gives the organization a cushion if the top choices withdraw from the process, it does not mean that the process of job offers can proceed at a leisurely pace. Immediate action needs to be taken with the top choices in case they decide to withdraw and there is a need to go to backups. This is especially true in tight labor markets, where there is a strong demand for the services of people on the ranking list.

Grouping

With the grouping method, finalists are banded together into rank-ordered categories. In Exhibit 11.8, the finalists are grouped according to whether they are top choices, acceptable, or last resorts. The advantage of this method is that it permits ties among finalists, thus avoiding the need to assign a different rank to each person. The disadvantage is that decisions still have to be made from among the top choices. These decisions might be based on factors such as probability of each person accepting the offer.
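As a small illustration of how these three methods differ in practice, here is a minimal Python sketch using the finalists from Exhibit 11.8. The numeric scores behind the ranking and the grouping cut-offs are assumed for illustration; only the names and the resulting groups come from the exhibit.

```python
# Three methods of final choice applied to the Exhibit 11.8 finalists.
import random

finalists = {"Keisha": 95, "Meg": 92, "Buster": 84, "Lyn Aung": 80, "Casey": 71, "Luis": 65}

# Random selection: every finalist has an equal chance.
random_pick = random.choice(list(finalists))

# Ranking: order finalists from most to least desirable; offers go down the list.
ranking = sorted(finalists, key=finalists.get, reverse=True)

# Grouping: band finalists into rank-ordered categories (cut-offs assumed).
def group(score):
    if score >= 90:
        return "top choice"
    if score >= 75:
        return "acceptable"
    return "last resort"

grouping = {name: group(score) for name, score in finalists.items()}
print(random_pick, ranking, grouping, sep="\n")
```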
Ongoing Hiring

In some organizations, the hiring process is continuous, meaning that there is never a final list of candidates to be selected. Instead, an organization that has continuous needs for employees in a variety of positions might continuously collect résumés from interested parties, and then when positions open up, call in for interviews everyone who passes the minimum qualifications for open jobs. In many ways, this is like the method of hiring the first acceptable candidate, described under the minimum competency approach. Jobs with very high turnover rates, like entry-level retail and food service positions, are typically staffed in this way. The advantage of an ongoing hiring method is that it generates a large quantity of applicants who can start in a relatively short period of time, which can be very important for organizations that frequently replace staff. The disadvantage of this system is that it seldom allows for careful consideration of the best possible candidates from a set of qualified applicants.

Decision Makers

A final consideration in decision making for selection is who should participate in the decisions. That is, who should determine the process to be followed (e.g., establishing cut scores), and who should determine the outcome (e.g., who gets the job offer)? The answer is that both HR professionals and line managers must play a role. Although the two roles are different, both are critical to the organization. Employees may play certain roles as well.

Human Resource Professionals

As a general rule, HR professionals should have a high level of involvement in the processes used to design and manage the selection system. They should be consulted in matters such as which predictors to use and how to best use them. In particular, they need to orchestrate the development of policies and procedures in the staffing areas covered. These professionals have or know where to find the technical expertise needed to develop sound selection decisions. Also, they have the knowledge to ensure that relevant laws and regulations are being followed. Finally, they can also represent the interests and concerns of employees to management.

Although the primary role HR professionals should play is in terms of the process, they should also have some involvement in determining who receives job offers. One obvious area where this is true is with staffing the HR function. A less obvious place where HR professionals can play an important secondary role is in terms of providing input into selection decisions made by managers. HR professionals may be able to provide some perception on applicants that is not always perceived by line managers. For example, they may be able to offer some insight on the applicants' people skills (e.g., communications, teamwork). HR professionals are sensitive to these issues because of their training and experience. They may have data to share on these matters as a result of their screening interviews, knowledge of how to interpret paper-and-pencil instruments (e.g., personality test), and interactions with internal candidates (e.g., serving on task forces with the candidates).

The other area where HR professionals may contribute to outcomes is in terms of initial assessment methods. Many times, HR professionals are, and should be, empowered to make initial selection decisions, such as who gets invited into the organization for administration of the next round of selection. Doing so saves managers time in which to carry out their other responsibilities.
Also, HR professionY women applicants are actively solicited and not als can ensure that minorities and excluded from the applicant poolAfor the wrong reasons. Managers N As a general rule, a manager’s primary involvement in staffing is in determining who is selected for employment.2Managers are the SMEs of the business, and they are thus held accountable for the 6 success of the people hired. They are far less involved in determining the processes followed to staff the organization, because they often do not have the time or7expertise to do so. The average manager can also be expected to have no knowledge 5 of staffing research whatsoever, though that doesn’t mean he or she is uninterested in learning.36 B Although they may not play a direct role in establishing process, managers can U by HR professionals on process issues. They and should be periodically consulted should be consulted because they are the consumers of HR services. As such, they should be allowed to provide input into the staffing process to ensure that it is meeting their needs in making the best possible person/job matches. An additional benefit of allowing management a role in process issues is that as a result of their involvement, managers may develop a better understanding of why HR professionals prescribe certain practices. When they are not invited to be part of the process to establish staffing policy and procedures, line managers may view HR professionals as obstacles to hiring the right person for the job. hen12680_ch11_538-578.indd 567 3/30/11 9:34 AM 568 Part Five Staffing Activities: Employment It should also be noted that the degree of managers’ involvement usually depends on the type of assessment decisions made. Decisions made using initial assessment methods are usually delegated to the HR professional, as just discussed. Decisions made using substantive assessment methods usually involve some degree of input from the manager. Decisions made using discretionary methods are usually the direct responsibility of the manager. As a general rule, the extent of managerial involvement in determining outcomes should only be as great as management’s knowledge of the job. If managers are involved in hiring decisions for jobs with which they are not familiar, legal, measurement, and morale problems are likely D to be created. Employees A I Traditionally, employees have not been considered part of the ­decision-­making L process in staffing. But this tradition is slowly changing. For example, in team assessment approaches (see ChapterY 8), employees may have a voice in both process and outcomes. That is, they may , have ideas about how selection procedures are established and make decisions about, or provide input into, who gets hired. Involvement in the team approach is encouraged because it may give employees a sense of ownership of the work process R and help them better identify with organizational goals. Also, it may result in selecting members who are more compatible Y with the goals of the work team. Google includes line managers and peers in the A process is seen as a valuable way to get hiring process. Its ­consensus-­based hiring a variety of perspectives on the fit between applicants and the organization.37 In N order for employee involvement to be effective, employees need to be provided with staffing training just as managers are (see Chapter 9). 2 6 Legal Issues 7 One of the most important legal issues in decision making is that of cut scores or 5 hiring standards. 
These scores or standards regulate the flow of individuals from applicant to candidate to finalist to new hire. Throughout this flow, adverse impact may occur. When it does, the UGESP come into play. In addition, the organization could form a multipronged strategy for increasing workforce diversity.

Uniform Guidelines on Employee Selection Procedures

If the use of cut scores does not lead to adverse impact in decision making, the UGESP are essentially silent on the issue of cut scores. The discretion exercised by the organization as it makes its selection decisions is thus unconstrained legally. If adverse impact is occurring, the UGESP become directly applicable to decision making.

Under conditions of adverse impact, the UGESP require the organization to either eliminate its occurrence or justify it through the conduct of validity studies and the careful setting of cut scores:

Where cutoff scores are used, they should normally be set as to be reasonable and consistent with normal expectations of acceptable proficiency within the workforce. Where applicants are ranked on the basis of properly validated selection procedures and those applicants scoring below a higher cutoff score than appropriate in light of such expectations have little or no chance of being selected for employment, the higher cutoff score may be appropriate, but the degree of adverse impact should be considered.

This provision suggests that the organization should be cautious in general about setting cut scores that are above those necessary to achieve acceptable proficiency among those hired. In other words, even with a valid predictor, the organization should be cautious that its hiring standards are not so high that they create needless adverse impact. This is particularly true with ranking systems. Use of random methods (or, to a lesser extent, grouping methods) would help overcome this particular objection to ranking systems.

Whatever cut score procedure is used, the UGESP also require that the organization be able to document its establishment and operation. Specifically, the UGESP say that "if the selection procedure is used with a cutoff score, the user should describe the way in which normal expectations of proficiency within the workforce were determined and the way in which the cutoff score was determined."

The UGESP also suggest two options to eliminate adverse impact, rather than to justify it as in the validation and cut score approach. One option is use of "alternative procedures." Here, the organization uses an alternative selection procedure that causes less adverse impact (e.g., work sample instead of a written test) but has roughly the same validity as the procedure it replaces.

The other option is that of affirmative action. The UGESP do not relieve the organization of any affirmative action obligations it may have. Also, the UGESP strive to "encourage the adoption and implementation of voluntary affirmative action programs" for organizations that do not have any affirmative action obligations.
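To make the adverse impact trigger concrete, here is a minimal Python sketch of the four-fifths (80 percent) rule that enforcement agencies use as a first screen under the UGESP. The applicant and hire counts are assumed for illustration; the 50% versus 16% selection rates simply echo the cognitive ability example given earlier in the chapter.

```python
# Four-fifths rule screen for adverse impact.

def selection_rate(hired, applicants):
    return hired / applicants

def adverse_impact(minority_rate, majority_rate, threshold=0.80):
    """Impact ratio below the threshold flags potential adverse impact."""
    ratio = minority_rate / majority_rate
    return ratio, ratio < threshold

white_rate = selection_rate(hired=50, applicants=100)   # 0.50
black_rate = selection_rate(hired=16, applicants=100)   # 0.16

ratio, flagged = adverse_impact(black_rate, white_rate)
print(round(ratio, 2), flagged)   # 0.32 True -> well below the 4/5ths benchmark
```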
Diversity and Hiring Decisions

There has been considerable controversy and litigation over the issue of whether it is permissible for a legally protected characteristic such as race or gender to enter into a staffing decision at all, and if so, under exactly what circumstances. At the crux of the matter is whether staffing decisions should be based solely on a person's qualifications or on qualifications and the protected characteristic. It is argued that allowing the protected characteristic to receive some weight in the decision would serve to create a more diverse workforce, which many public and private organizations claim is something they have a compelling interest in and responsibility to do (refer back to Chapter 3 and the discussion of affirmative action).

It can be concluded that unless the organization is under a formal affirmative action plan (AAP), protected characteristics (e.g., race, sex, and religion) should not be considered in selection decision making. This conclusion is consistent with Equal Employment Opportunity Commission (EEOC) policy that the organization should use job-related hiring standards and that the same selection techniques and weights must be used for all people.38

How should the organization proceed, especially if it wants to not only comply with the law but also increase the diversity of its workforce? Several things might be done. First, carefully establish KSAOs (knowledge, skill, ability, and other characteristics) for jobs so that they are truly job related; as part of that process, attempt to establish some job-related KSAOs that may correlate with protected characteristics, such as diversity in experience and customer contacts. For example, a KSAO for the job of marketing manager might be "substantial contacts within diverse racial and ethnic communities." Both white and nonwhite applicants could potentially meet this requirement, increasing the chances of recruiting and selecting a person of color for the job. Second, use recruitment (both external and internal) as a tool for attracting a more qualified and diverse applicant pool. Third, use valid methods of KSAO assessment derived from a formal selection plan. Fourth, avoid clinical or excessively subjective prediction in making the assessment and deriving a total assessment or score for candidates. Instead, establish and use the same set of predictors and weights for them to arrive at the final assessment. Fifth, provide training in selection decision making for hiring managers and staffing managers. Content of the training should focus on overcoming usage of stereotypes, learning how to gather and weight predictor information consistently for all candidates, and looking for red flags about acceptance or rejection based on vague judgments about the candidate being a "good fit." Sixth, use a diverse group of hiring and staffing managers to gather and evaluate KSAO information, including a diverse team to conduct interviews. Finally, monitor selection decision making and challenge those decision makers who reject candidates who would enhance diversity to demonstrate that the reasons for rejection are job related.

When the organization is under an AAP, either voluntary or court imposed, the above recommendations are still appropriate. Attempts to go even further and provide a specific "plus" to protected characteristics should not be undertaken without a careful examination and opinion of whether this would be legally permissible.

Summary

The selection component of a staffing system requires that decisions be made in several areas.
The critical concerns are deciding which predictors (assessment methods) to use, determining assessment scores and setting cut scores, making final decisions about applicants, considering who within the organization should help make selection decisions, and complying with legal guidance.

In deciding which assessment methods to use, consideration should be given to the validity coefficient, face validity, correlation with other predictors, adverse impact, utility, and applicant reactions. Ideally, a predictor would have a validity coefficient with large magnitude and significance, high face validity, low correlations with other predictors, little adverse impact, and high utility. In practice, this ideal situation is hard to achieve, so decisions about trade-offs are necessary.

How assessment scores are determined depends on whether a single predictor or multiple predictors are used. In the case of a single predictor, assessment scores are simply the scores on the predictor. With multiple predictors, a compensatory, multiple hurdles, or combined model must be used. A compensatory model allows a person to compensate for a low score on one predictor with a high score on another predictor. A multiple hurdles model requires that a person achieve a passing score on each successive predictor. A combined model uses elements of both the compensatory and multiple hurdles models.

In deciding who earns a passing score on a predictor or a combination of predictors, cut scores must be set. When doing so, the consequences of setting different levels of cut scores should be considered, especially those of assessing some applicants as false positives or false negatives. Approaches to determining cut scores include minimum competency, top-down, and banding methods. Professional guidelines were reviewed on how best to set cut scores.

Methods of final choice involve determining who will receive job offers from among those who have passed the initial hurdles. Several methods of making these decisions were reviewed, including random selection, ranking, and grouping. Each has advantages and disadvantages.

Multiple individuals may be involved in selection decision making. HR professionals play a role primarily in determining the selection process to be used and in making selection decisions based on initial assessment results. Managers play a role primarily in deciding whom to select during the final choice stage. Employees are becoming part of the decision-making process, especially in team assessment approaches.

A basic legal issue is conformance with the UGESP, which provide guidance on how to set cut scores in ways that help minimize adverse impact and allow the organization to fulfill its EEO/AA obligations. In the absence of an AAP, protected class characteristics must not enter into selection decision making. That prohibition notwithstanding, organizations can take numerous steps to increase workforce diversity.

Discussion Questions

1. Your boss is considering using a new predictor. The base rate is high, the selection ratio is low, and the validity coefficient is high for the current predictor. What would you advise your boss and why?
2. What are the positive consequences associated with a high predictor cut score? What are the negative consequences?
3. Under what circumstances should a compensatory model be used? When should a multiple hurdles model be used?
4. What are the advantages of ranking as a method of final choice over random selection?
5. What roles should HR professionals play in staffing decisions? Why?
6. What guidelines do the UGESP offer to organizations when it comes to setting cut scores?

Ethical Issues

1. Do you think companies should use banding in selection decisions? Defend your position.
2. Is clinical prediction the fairest way to combine assessment information about job applicants, or are the other methods (unit weighting, rational weighting, and multiple regression) fairer? Why?

Applications

Utility Concerns in Choosing an Assessment Method

Randy May is a 32-year-old airplane mechanic for a small airline based on Nantucket Island, Massachusetts. Recently, Randy won $2 million in the New England lottery. Because Randy is relatively young, he decided to invest his winnings in a business to create a future stream of earnings. After weighing many investment options, Randy chose to open up a chain of ice cream shops in the Cape Cod area. (As it turns out, Cape Cod and the nearby islands are short of ice cream shops.) Based on his own budgeting, Randy figured he had enough cash to open shops on each of the two islands (Nantucket and Martha's Vineyard) and two shops in small towns on the Cape (Falmouth and Buzzards Bay). Randy contracted with a local builder and the construction/renovation of the four shops is well under way.

The task that is occupying Randy's attention now is how to staff the shops. Two weeks ago, he placed advertisements in three area newspapers. So far, he has received 100 applications. Randy has done some informal HR planning and figures he needs to hire 50 employees to staff the four shops. Being a novice at this, Randy is unsure how to select the 50 people he needs to hire. Randy consulted his friend Mary, who owns the lunch counter at the airport. Mary told Randy that she used interviews to get "the most knowledgeable people possible" and recommended it to Randy because her people had "generally worked out well." While Randy greatly respected Mary's advice, on reflection several questions came to mind. Does Mary's use of the interview mean that it meets Randy's requirements? How could Randy determine whether his chosen method of selecting employees was effective or ineffective?

Confused, Randy also sought the advice of Professor Ray Higgins, from whom Randy took an HR management course while getting his business degree. After learning of the situation and offering his consulting services, Professor Higgins suggested that Randy choose one of two selection methods (after paying Professor Higgins's consulting fees, he cannot afford to use both methods). The two methods Professor Higgins recommended are the interview (as Mary recommended) and a work sample test that entails scooping ice cream and serving it to a customer. Randy estimates that it would cost $100 to interview an applicant and $150 per applicant to administer the work sample. Professor Higgins told Randy that the validity of the interview is r = .30 while the validity of the work sample is r = .50. Professor Higgins also informed Randy that if the selection ratio is .50, the average score on the selection measure of those applicants selected is z = .80 (.80 standard deviations above the mean). Randy plans to offer employees a wage of $6 per hour. (Over the course of a year, this would amount to a $12,000 salary.)

Based on the information presented above, Randy would really appreciate it if you could help him answer the following questions:

1. How much money would Randy save using each selection method?
2. If Randy can use only one method, which should he use?
3. If the number of applicants increases to 200 (more applications are coming in every day), how will your answers to questions 1 and 2 change?
4. What limitations are inherent in the estimates you have made?
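As a starting point (a sketch of one common way to frame such questions, not the assigned solution), a Brogden-Cronbach-Gleser style utility estimate can be set up in a few lines of Python. The SDy figure below uses the common 40%-of-salary rule of thumb, which is an assumption for illustration, not a number given in the case.

```python
# Rough per-year utility comparison of two selection methods.

def utility_gain(n_hired, n_applicants, validity, avg_z_selected, sd_y, cost_per_applicant):
    gain = n_hired * validity * avg_z_selected * sd_y   # payoff from better hires (per year)
    cost = n_applicants * cost_per_applicant            # cost of assessing all applicants
    return gain - cost

sd_y = 0.40 * 12_000          # assumed: 40% of the $12,000 annual salary

interview   = utility_gain(50, 100, 0.30, 0.80, sd_y, 100)
work_sample = utility_gain(50, 100, 0.50, 0.80, sd_y, 150)
print(interview, work_sample)  # compare the two estimates; the gap drives the choice
```

Rerunning the same function with 200 applicants (and the corresponding number of hires and selection ratio) is one way to explore how the answers change as the applicant pool grows.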
Choosing Entrants Into a Management Training Program

Come As You Are, a convenience store chain headquartered in Fayetteville, Arkansas, has developed an assessment program to promote nonexempt employees into its management training program. The minimum entrance requirements for the program are five years of company experience, a college degree from an accredited university, and a minimum acceptable job performance rating (3 or higher on a 1–5 scale). Anyone interested in applying for the management program can enroll in the half-day assessment program, where the following assessments are made:

1. Cognitive ability test
2. Integrity test
3. Signed permission for background test
4. Brief (30-minute) interview by various members of the management team
5. Drug test

At the Hot Springs store, 11 employees have applied for openings in the management training program. The selection information on the candidates is provided in the exhibit below. (The scoring key is provided at the bottom of the exhibit.) It is estimated that three slots in the program are available for qualified candidates from the Hot Springs location. Given this information and what you know about external and internal selection, as well as staffing decision making, answer the following questions:

1. How would you go about deciding whom to select for the openings? In other words, without providing your decisions for the individual candidates, describe how you would weigh the various selection information to reach a decision.
2. Using the decision-making process from the previous question, which three applicants would you select for the training program? Explain your decision.
3. Although the data provided in the exhibit reveal that all selection measures were given to all 11 candidates, would you advise Come As You Are to continue to administer all the predictors at one time during the half-day assessment program? Or, should the predictors be given in a sequence so that a multiple hurdles or combined approach could be used? Explain your recommendation.

Exhibit: Predictor Scores for 11 Applicants to Management Training Program

Name       Company Experience  College Degree  Performance Rating  Cognitive Ability Test  Integrity Test  Background Test  Interview Rating  Drug Test
Radhu      4                   Yes             4                   9                       6               OK               6                 P
Merv       12                  Yes             3                   3                       6               OK               8                 P
Marianne   9                   Yes             4                   8                       5               Arrest '95       4                 P
Helmut     5                   Yes             4                   5                       5               OK               4                 P
Siobhan    14                  Yes             5                   7                       8               OK               8                 P
Galina     7                   No              3                   3                       4               OK               6                 P
Raul       6                   Yes             4                   7                       8               OK               2                 P
Frank      9                   Yes             5                   2                       5               OK               7                 P
Osvaldo    10                  Yes             4                   10                      9               OK               3                 P
Byron      18                  Yes             3                   3                       7               OK               6                 P
Aletha     11                  Yes             4                   7                       6               OK               5                 P
Scale      Years               Yes–No          1–5                 1–10                    1–10            OK–Other         1–10              P–F
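For readers who want to see how such a combination might be operationalized, here is a minimal Python sketch that screens the exhibit data with a multiple hurdles step (the stated minimum entrance requirements plus the pass/fail checks) and then unit-weights the three scored predictors. The choice of hurdles and the decision to unit-weight only the scored predictors are illustrative assumptions, not the textbook's prescribed answer to the case.

```python
# Multiple hurdles screen followed by unit weighting of standardized predictors.
from statistics import mean, pstdev

# name: (years, degree, perf, cognitive, integrity, interview, background_ok, drug_pass)
candidates = {
    "Radhu":    (4,  True,  4, 9,  6, 6, True,  True),
    "Merv":     (12, True,  3, 3,  6, 8, True,  True),
    "Marianne": (9,  True,  4, 8,  5, 4, False, True),
    "Helmut":   (5,  True,  4, 5,  5, 4, True,  True),
    "Siobhan":  (14, True,  5, 7,  8, 8, True,  True),
    "Galina":   (7,  False, 3, 3,  4, 6, True,  True),
    "Raul":     (6,  True,  4, 7,  8, 2, True,  True),
    "Frank":    (9,  True,  5, 2,  5, 7, True,  True),
    "Osvaldo":  (10, True,  4, 10, 9, 3, True,  True),
    "Byron":    (18, True,  3, 3,  7, 6, True,  True),
    "Aletha":   (11, True,  4, 7,  6, 5, True,  True),
}

# Hurdle: the stated minimum entrance requirements plus clean background/drug checks.
def passes_hurdles(c):
    years, degree, perf, _, _, _, background_ok, drug_pass = c
    return years >= 5 and degree and perf >= 3 and background_ok and drug_pass

survivors = {n: c for n, c in candidates.items() if passes_hurdles(c)}

# Unit weighting: standardize the three scored predictors and sum the z-scores.
def z(values):
    m, sd = mean(values), pstdev(values)
    return [(v - m) / sd for v in values]

scored = [c[3:6] for c in survivors.values()]            # cognitive, integrity, interview
z_by_predictor = [z(col) for col in zip(*scored)]
totals = {n: sum(col[i] for col in z_by_predictor) for i, n in enumerate(survivors)}

top_three = sorted(totals, key=totals.get, reverse=True)[:3]
print(top_three)
```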
Tanglewood Stores Case

The cases you have considered up to this point have involved making aggregated decisions about a large number of applicants. After gathering relevant information, there is still the important task of determining how to combine this information to arrive at a set of candidates. This case combines several concepts from the chapters on selection and decision making.

The Situation

Tanglewood is faced with a situation in which 11 qualified applicants have advanced to the candidate stage for the job of store manager. These individuals have submitted résumés, completed several standardized tests similar to those described in the case on measurement, and engaged in initial interviews. There is considerable debate among the regional management staff that is responsible for selecting store managers about which of these candidates are best qualified to make it to the finalist stage, and they have asked you to help them reach a more informed decision.

Your Tasks

You will select the top candidates for the finalist pool by using various combinations of the predictors. The methods for combining predictors will include clinical prediction, unit weighting, rational weighting, and a multiple hurdles model. In your answers you will provide detailed descriptions of how you made decisions and also assess how comfortable you are with the results. You will also describe what you think are appropriate minimal cut scores for each of the predictors. The background information for this case, and your specific assignment, can be found at www.mhhe.com/heneman7e.

Endnotes

1. F. L. Schmidt and J. E. Hunter, "Moderator Research and the Law of Small Numbers," Personnel Psychology, 1978, 31, pp. 215–232.
2. J. Cohen, "The Earth Is Round (p

Explanation & Answer



Human Resource Management
Staffing Discussion
Name
Instructor
Institutional Affiliation
Date


1. Describe some of the most common pitfalls in using metrics for evaluating the
effectiveness of a selection system.
The recruitment and selection process is integral to both developing and established organizations. Recruitment refers to searching for qualified and skilled candidates to fill job vacancies within the company. Attracting and retaining top talent strengthens a company's competitive edge, while ineffective recruitment and selection lead to significant disruption, low productivity, and long-term costs (Armstrong & Taylor, 2014). Once top talent has been identified through recruitment, suitable candidates for the job(s) are chosen through the selection process. The primary goal of selection is to ensure that the best candidates are hired using equitable, fair, and efficient assessment approaches.
To assess the effectiveness of the recruitment and selection system, organizations often use a range of metrics. These metrics fall into two broad categories: efficiency and speed measures, such as the aging of requisitions and the timeliness of HR manager feedback, and quality measures. However, there are drawbacks associated with using metrics to evaluate the effectiveness of a selection system (Aswathappa, 2013). One common limitation is that many metrics are subjective in nature. Because a candidate's suitability cannot be assessed directly, HR staff assign a numerical score that reflects the candidate's perceived suitability, and whether that score is "correct and accurate" remains a matter of interpretation (Armstrong & Taylor, 2014). Bias and personal preference are a further weakness of using metrics to assess the effectiveness of the selection system. Because these metrics are designed and implemented by HR personnel, there is a substantial possibility that they carry the evaluators' biases into the selection process.


2. What are the four main stages in developing effective policies and procedures? Why is
each stage important?
I. Consultation
The first stage in developing effective procedures and policies is consultation. When
creating pol...

