Multiattribute Utility Theory

Anonymous
Asked: Dec 17th, 2017
$45

Question description

I need help in summarizing the multi-attribute utility theory paper in the attachment, along with its graphs. I also need to come up with a new recommended approach.

[Figure 1 - Univariate utility curve: utility (0 to 1.0) plotted against attribute level]
[Figure 2 - Group-utility-function characterization: efficient frontiers of decision makers 1 and 2 in (U1, U2) space, with GUF = alpha*U1 + (1-alpha)*U2 and the limiting case GUF = U1 + 0*U2]
[Figure 3 - Group-utility-function characterization with proxy decision-maker efficient frontier: GUF = beta*U1 + (1-beta)*U2]
[Figure 4 - Model criteria: the four criteria (environment, mission impact, cost/domestic commitment, satellite status) and their subcriteria, including political states, overall space capability, national economy, mission criticality, space/ground ratio, maturity of mission, economic commitment, economic impact, launch priority, mission performance status, contribution to mission, level of technology, and expected remaining lifetime]
[Figure 5 - Strategic equivalence: isovalue lines of the additive form (UA) and multiplicative form (UM) plotted over satellite status versus cost/domestic commitment]
A MULTI-ATTRIBUTE-UTILITY-THEORY MODEL THAT MINIMIZES INTERVIEW-DATA REQUIREMENTS: A CONSOLIDATION OF SPACE LAUNCH DECISIONS

Raymond W. Staats, First Space Launch Squadron, Cape Canaveral Air Force Station, Florida
Yupo Chan, Air Force Institute of Technology (correspondent: Department of Operational Sciences (ENS), 2950 P Street, Wright-Patterson AFB, OH 45433)

ABSTRACT

We use multi-attribute utility theory (MAUT) to define a mathematical representation of a decision maker's utility. By eliminating the use of lottery questions, the survey is simpler to administer. The calibrated utility function is then used in a multi-criteria optimization model for scheduling purposes. A case study is conducted in which two different decision makers' preferences are combined to characterize a group utility function. Again, a simple procedure is proposed to arrive at a group utility function.

Keywords: univariate utility curves, independence properties, ratio versus interval scales, group utility function.

I. Introduction

One of the limitations in applying multi-attribute utility theory to actual decision problems lies in the survey process. The survey required to capture the decision maker's preference structure is exceptionally complex and contains questions and methodologies that are very difficult for the interviewee to understand. In particular, the use of lottery questions, a cornerstone of MAUT, is quite cumbersome. Typically, the decision maker is given two outcomes, A and B, such that the probability that A occurs is p and the probability that B occurs is 1-p. The decision maker is then asked to specify C such that he is indifferent between obtaining C with certainty and the outcome of the lottery. Problems with this method quickly become apparent when working with a decision maker who is not familiar with MAUT. A great deal of time must be spent by the interviewer in an attempt to make the decision maker feel comfortable with the questions being asked. Oftentimes, the interviewer must review the axioms of probability and thoroughly introduce the lottery concept to the decision maker. As a result, the survey takes a great deal of time to complete, and too often the decision maker never completely understands the question he is answering. The decision maker then loses confidence in the model that is being developed. A simpler method is needed to make the survey shorter and easier to follow, and hence make MAUT a better accepted technique.

In many decision-making situations, a hierarchy is required to refine each criterion into subcriteria. In the case study to follow, an example can be found where the status of a satellite is refined into its mission performance status, its contribution to the mission, its level of technology, and its expected remaining lifetime. Seo and Sakawa (1988) refer to this method as the "nesting of preferences." Where subcriteria are defined, pairwise comparisons -- a cornerstone of MAUT -- are done only between subcriteria within the same group. A multivariate utility function is formulated for each group of subcriteria under a criterion; these are called the criterion functions. Pairwise comparisons are then conducted for the second tier of criteria, just as they were for the subcriteria. Seo and Sakawa show that MAUT techniques are equally applicable to a tiered model. The nesting approach is advantageous because it allows us to work with a model that has many criteria without becoming overburdened with pairwise comparisons.
Aggregating individual utility functions into a group utility function has always been the last frontier for MAUT. It will be shown that this can be approached through sensitivity analysis of a multicriteria optimization. This will be demonstrated by a two-decision-maker example in which we are able to state the conditions under which one individual decision maker's decision will be guaranteed to prevail in a group environment. We are able to state these conditions without determining the efficient frontier of the second decision maker, and without specifying the form of the group utility function.

II. Background

The use of the fractile or bracketing method to determine the shape of the univariate utility function is cumbersome inasmuch as it hinges on the lottery method. Kirkwood (1994) offers a simplifying assumption that alleviates the problem. He cites extensive empirical research which concludes that univariate utility functions are well approximated using an exponential form. Hence, the decision maker needs only to indicate one point, from which the constant of the exponential function, called the risk attitude constant, is derived. The univariate utility curve is then defined for the entire range of the attribute.
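To make Kirkwood's simplification concrete, the following minimal Python sketch evaluates an exponential univariate utility curve for an attribute scaled to [0, 1]. The exact parameterization of the paper's equation (3) is not reproduced here; this is the commonly used exponential form consistent with Kirkwood's work, and the constant rho plays the role of the risk attitude constant.

import math

# Common exponential utility form for an attribute scaled to [0, 1]:
#   u(x) = (1 - exp(-x/rho)) / (1 - exp(-1/rho)),  rho != 0,
# which approaches the linear (risk-neutral) curve u(x) = x as rho grows large.
def exponential_utility(x, rho):
    """Utility of attribute level x in [0, 1] for risk constant rho (rho != 0)."""
    return (1.0 - math.exp(-x / rho)) / (1.0 - math.exp(-1.0 / rho))

# A single parameter controls the whole curve, which is why one elicited point
# (for example, the 0.5-utility level) is enough to pin it down.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(x, round(exponential_utility(x, rho=0.5), 3))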
Verifying the independence of attributes is another important step of MAUT. When there is a large number of criteria, the survey again becomes very impractical to conduct. Consider a decision problem with five attributes. To show preferential independence, preferences are elicited over a pair of attributes while a third attribute is held at a fixed value, and the comparisons are then repeated as the level of the third attribute is varied across its range. With five attributes, this requires thirty sets of comparisons. To verify mutual utility independence, twenty more pairwise comparisons must be made. Clearly, the survey quickly becomes too burdensome for a decision maker to complete within a reasonable amount of time. Keeney and Raiffa (1976) come to our rescue by stating the following theorem: given attributes x1, x2, ..., xn (n >= 3), the following are equivalent: (a) the attributes x1, x2, ..., xn are mutually utility independent, and (b) x1 is utility independent and {x1, xi} is preferentially independent of the remaining attributes for i = 2, 3, ..., n. This immediately eliminates fifteen pairwise comparisons from the survey. In addition, if we carefully define our criteria, we can reasonably make the assumption of preferential independence, thereby eliminating thirty more sets of comparisons. Now a forbidding portion of the survey has been reduced to a manageable size.

Traditionally in MAUT, criterion weights are determined using lottery questions. As stated before, this methodology is often confusing to the decision maker. Simplification is necessary to achieve consistent responses. Seo and Sakawa (1988) suggest a method to break this process down into smaller, more manageable steps. First, we ask the decision maker to rank the attributes in descending order of importance, which is normally a fairly easy task. Next, we assess relative weights. Using one attribute as the base, we can examine tradeoffs between the base attribute and the other attributes. A good choice of the base attribute is the one ranked the highest. We then ask the question, "How much of the base attribute can be given up to gain an additional unit of another attribute?" In this manner, we collect information on the preference intensities between the attributes.

Consistency can be checked by using a different attribute as the base and re-asking the same questions. Finally, the weight of our base attribute must be determined. Here we substitute the swing weight method proposed by Clemen (1991) for the traditional lottery question. In this method, we start with all attributes at their worst level (the worst possible alternative) and assign this hypothetical alternative a utility of 0. Next, we "swing" the base attribute to its best possible level and ask the decision maker to assign a utility that describes his/her assessment of such an alternative. The utility thus assigned can be shown mathematically to be the weight of the base attribute. Together with the relative weights already determined, we can now derive all the attribute weights. The key benefit of this methodology is that we have completely eliminated the use of lottery questions.

Notice that we have combined the ratio scale with the interval scale in the above proposal, where the ratio scale is used to compare between criteria and the interval scale is used to score alternatives. This combination is not without precedent. Seo and Sakawa specify this combination in their approach to measuring utility functions. Marvin and Hutchinson (1994) also reported success in using this methodology. The method proposed takes advantage of both scales to measure the criterion weights. A ratio scale measurement requires an explicit (or at least an implicit) zero point. The swing method specifies the zero point as the case where the multivariate utility does not increase when the criterion is varied from its low value to its high value; that is, the criterion weight is zero. Therefore, it is valid to express one criterion weight as a ratio to another criterion weight. Once all weights are expressed as ratios to one another, the swing weight experiment only needs to be performed once to place the weights on an interval scale. The advantage is that ratio comparisons are easier to obtain from the decision maker than swing weights.
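A minimal sketch of this weight-elicitation scheme follows: all weights are recovered from the elicited ratios plus one swing-weight answer. The function name and the example numbers are hypothetical, not taken from the paper.

# Illustrative sketch: deriving attribute weights from ratio comparisons and a
# single swing-weight answer, as described above. Names and numbers are hypothetical.
def derive_weights(relative_to_base, base_swing_weight):
    """relative_to_base: dict mapping attribute name -> w_i / w_base, from the
    ratio ("how much of the base attribute...") questions.
    base_swing_weight: the utility the decision maker assigns when only the
    base attribute is swung from worst to best (this equals the base weight)."""
    return {name: ratio * base_swing_weight
            for name, ratio in relative_to_base.items()}

# Hypothetical responses: base attribute "mission impact" with swing weight 0.4;
# "environment" judged half as important, "satellite status" three-quarters.
ratios = {"mission impact": 1.0, "environment": 0.5, "satellite status": 0.75}
print(derive_weights(ratios, base_swing_weight=0.4))
# -> {'mission impact': 0.4, 'environment': 0.2, 'satellite status': 0.3}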
Once we have examined the underlying theory required to make our model plausible, we need to turn our attention to the decision maker. Since every individual has a uniquely different preference structure, whom we select to interview is very important. Most major decisions are made by a group of decision makers, rather than by an individual. But MAUT provides the framework for deriving an individual's utility function, not a group's. This is often identified as the limitation of the state of the art in this field. One approach is to aggregate individual utility functions into a group utility function. But what should the form of the aggregate function be? de Neufville (1990) indicates that finding an appropriate form to represent a group utility function is problematic and usually does not yield satisfactory results. Seo and Sakawa show, however, that under certain conditions an additive form of a group utility function may be appropriate, via their "representation theorem for a group utility function." They suggest two methods to determine the weighting factors for each decision maker: the "benevolent dictator" approach and the "collective response" approach. In the former, the weights are specified by a knowledgeable individual. This approach is trivial in its application but often unsatisfactory in its results. The latter approach requires an extensive interpersonal comparison of preferences and an interpersonal comparison of differences.

Another approach is to model the group's choices as those of an individual. This eliminates the problem of determining the functional form, since we use the well-established MAUT process. The well-publicized "Arrow's paradox" finds fault with this approach, inasmuch as a series of individually expressed preferences can be shown to be intransitive. Keeney and Raiffa (1976) suggest that in deciding whether to use an individual or a group as the decision maker, we need to step back and examine the purpose of the study. Are we trying to describe the decision process, or prescribe what decision should be made? They propose that a unitary decision maker is appropriate for the prescriptive approach -- i.e., one is assessing what solution the decision maker should propose. In this approach, we can incorporate into the model the decision maker's perceived notions about what others might do (i.e., the political environment) as part of the uncertainties he faces.

When faced with limited resources, we often use a multi-criterion optimization model to allocate resources in accordance with the calibrated utility functions. Such an approach arrives at a decision representing an optimal way of allocating resources. Using vector sensitivity analysis, we can then define limits on the weighting factors that will affect the optimum decision. Wendell (1985) outlines a "tolerance" approach that determines how much each objective function coefficient can simultaneously and independently vary. Suppose the objective function coefficients represent the utilities of particular alternatives; we can then analytically determine how variations of the objective function coefficients from their original values affect the optimal solution for prioritizing alternatives. This provides a way of incorporating a second decision maker's preferences and, in an indirect way, arrives at a group utility function. More will be said about this important subject in the sequel.

III. The Survey

In this section we provide the details of the survey procedure. The proposed procedure was designed with four goals in mind: clarity, simplicity, brevity, and consistency. The survey is designed to be used in a face-to-face interview. The analyst provides initial background information, guides the decision maker through the questions, and records responses. As de Neufville points out, an experienced analyst is important to this process. The analyst should ideally conduct a few practice sessions with trial decision makers before conducting the survey with the actual decision maker. The decision maker must be gradually introduced to the concepts of utility theory. Clemen very neatly laid out a set of "axioms of expected utility" that is useful in accomplishing this. Furthermore, the decision maker must be reminded that there are no "right or wrong" answers; the goal of the survey is to capture the decision maker's preference structure. The survey is designed to take no more than a definable amount of time to complete. (In the case study to follow, this is limited to no more than two hours.) An exhausted decision maker is unlikely to give reliable or consistent responses. An experienced analyst or interviewer can reduce survey completion time at the second and subsequent interviews, as both the analyst and the decision maker become more familiar with the process. Finally, consistency is achieved by using a written survey. As Clemen astutely points out, how questions are posed can greatly influence the answers given.
By using a written survey, we are assured that all decision makers interviewed are given identical survey instruments.

The first section of the survey maps out the subcriterion utility functions, where a subcriterion is defined as a lower tier than a regular criterion. Subcriteria often arise in a complex problem where each criterion needs to be broken down further into components. The decision maker is given some information and answers a question concerning each subcriterion. First, the subcriterion is defined. Each subcriterion is scaled from zero to one, and discrete levels of the subcriterion between these points are defined. Next, the decision maker is told that the lowest level of the attribute is assigned a minimum utility of zero and the highest level is assigned a maximum utility of one; that is, (1). The decision maker is then asked to define an attribute level that he/she feels has a utility of 0.5 (2). These three points allow the univariate utility function to be drawn, using the method proposed by Kirkwood as discussed previously. An example of such a univariate utility curve is shown in Figure 1. The mathematical representation of this univariate utility function takes the exponential form (3), where x is between 0 and 1 (including the end points) and r is the risk attitude constant (r not equal to 0). r is determined from the 0.5 utility point specified by the decision maker, using a "0.5 utility point versus risk attitude constant" lookup table created and documented in Kirkwood (1994). If the decision maker indicates that the utility midpoint (0.5) occurs at the attribute midpoint, then the univariate utility function is linear and r = 0; in this case, the mathematical expression of the utility function is simply the linear form (4).
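As an alternative to the lookup table, the risk constant can also be solved for numerically from the elicited 0.5-utility point. The sketch below is hedged: it uses the same assumed exponential parameterization as the earlier sketch (not necessarily the paper's equation (3)), the function names are illustrative, and it assumes the elicited midpoint is not extreme (roughly between 0.1 and 0.9).

import math

def exponential_utility(x, rho):
    return (1.0 - math.exp(-x / rho)) / (1.0 - math.exp(-1.0 / rho))

def solve_risk_constant(x_half):
    """Bisection on rho so that u(x_half) = 0.5. If x_half is (nearly) 0.5 the
    curve is linear (u(x) = x) and no finite constant is needed (return None)."""
    if abs(x_half - 0.5) < 1e-6:
        return None  # linear case
    f = lambda rho: exponential_utility(x_half, rho) - 0.5
    # Concave curves (midpoint below 0.5) need rho > 0; convex need rho < 0.
    lo, hi = (0.05, 100.0) if x_half < 0.5 else (-100.0, -0.05)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(solve_risk_constant(0.35))  # e.g., a decision maker whose 0.5-utility level is 0.35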
Section two of the survey verifies independence properties. It is recommended that preferential independence be assumed. As Keeney points out, preferential independence is a reasonable assumption for most multi-attribute decision models, and cases where it does not hold are fairly rare. Intuitively, most analysts select criteria that do not replicate one another, and in so doing preferential independence is usually achieved. A single subcriterion is then chosen as a basis for comparison. We ask a series of questions to determine whether this subcriterion is utility independent of each of the other twelve subcriteria. The decision maker is asked whether the utility midpoint chosen in section one of the survey for the base subcriterion is affected by changing the level of any of the other subcriteria. If it is not, then the base subcriterion is utility independent of the other subcriteria. Here we make use of Keeney's weaker conditions for utility independence as described previously. Hence, mutual utility independence is verified.

The next survey section determines the multivariate criterion utility functions. Here the subcriteria are grouped under their respective criteria. The decision maker is asked to rank, in descending order of importance, the subcriteria in each group. Once the subcriteria are rank ordered, the decision maker is asked to indicate their relative importance. The next step is perhaps the most difficult section of the survey. The subcriteria are again grouped into their respective criteria, and then for each group the following definitions are given: (5). Given these definitions, the decision maker is asked to assign a utility value to a satellite where the highest-ranked subcriterion is set at its maximum level while the other subcriteria are set at their respective minimum levels (6). Seo and Sakawa proved that the utility value given by the decision maker for this type of formulation is the weighting factor of the maximized subcriterion, x1. The process is repeated for each group of subcriteria.

The next two sections of the survey form create the overall utility function. Here the decision maker ranks the criteria in descending order, then assesses their relative weights. This portion has proven to be a little tricky, for the following reason. A criterion is maximized when all of its respective subcriteria are maximized; similarly, a criterion is minimized when its subcriteria are minimized. Hence, the decision maker is attempting to assign a utility to a situation where a large number of subcriteria are set at fixed values. Despite the difficulty, this method is still far superior to lottery questions, as it is much easier to work with. The analyst's guidance in this section, along with the experience gained by the decision maker in section four of the survey form, makes this task manageable.

IV. Utility Function Calibration

The reason for establishing preferential and utility independence in section two of the survey is that they are necessary conditions for a multiplicative utility function of the form (7), where the U(Xi)'s are the univariate utility functions, the wi's are the calibrated weighting factors, and k is the normalizing parameter that allows the multivariate utility function to also be scaled from zero (worst) to one (best). The same functional form holds for both the criterion utility functions and the overall utility function. Subcriterion weights are derived from questions in sections three and four of the survey, and criterion weights are calculated using information in sections five and six. A subcriterion's (criterion's) weighting factor is found by multiplying the weighting factor assigned to the highest-ranked subcriterion (criterion) by the subcriterion's (criterion's) relative weight.

Clemen offers some discussion of an interesting implication of these weighting factors. When the weights of a set of criteria are summed, if they add to less than one, the criteria are said to be substitutes for each other. Conversely, if the sum is greater than one, they are complements of each other. This insight, provided to the decision maker during the survey process, can greatly assist the decision maker's thought process in assigning weights.

Once the weights have been determined, the normalizing constant k can be calculated. We begin with the multiplicative form of the utility function. We then set all of the criteria to their maximum values, making each U(Xi) = 1. This simplifies the equation to (8). This equation is expanded and the constant is moved to the right-hand side; for the four-criteria case, we then have (9). We can solve for k by finding the root of this equation, as is done in the traditional MAUT literature. The process is repeated for each criterion and finally for the overall function.
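The following hedged Python sketch shows the root-finding step for k. The paper's equations (7)-(9) are not reproduced; this assumes the standard Keeney-Raiffa multiplicative form, in which setting every univariate utility to 1 gives 1 + k = product over i of (1 + k*w_i). The weights used in the example are hypothetical.

def k_residual(k, weights):
    prod = 1.0
    for w in weights:
        prod *= 1.0 + k * w
    return prod - (1.0 + k)

def solve_k(weights, tol=1e-10):
    """Nonzero root of 1 + k = prod(1 + k*w_i), found by bisection.
    If the weights sum to (nearly) one, the form is additive and k ~ 0."""
    s = sum(weights)
    if abs(s - 1.0) < 1e-9:
        return 0.0
    # Substitutes (sum < 1) give k > 0; complements (sum > 1) give -1 < k < 0.
    lo, hi = (1e-9, 1.0) if s < 1.0 else (-1.0 + 1e-9, -1e-9)
    while s < 1.0 and k_residual(hi, weights) < 0:
        hi *= 2.0          # expand the bracket until it contains the root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if k_residual(lo, weights) * k_residual(mid, weights) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical criterion weights; they sum to less than one (substitutes), so k > 0.
print(round(solve_k([0.4, 0.3, 0.15, 0.1]), 4))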
V. Multicriteria Optimization Model

Typically, the utility functions so calibrated are used to select a subset of the alternatives for implementation, given the limited resources. In a limiting case, one often uses a linear program to solve such a class of problems, even though the original problem may in fact be nonlinear. Consider the linear program (10), where the cost coefficients in the objective function represent the utilities of the various alternatives j and xj is the amount of resources allocated to alternative j. The tolerance approach of sensitivity analysis determines how much each objective function coefficient (utility) can simultaneously and independently vary without affecting optimality. The tolerance, tau, is determined from formula (11). The numerator in that formula is the "reduced cost" of a nonbasic variable, where K is the set of nonbasic variables. The formula is derived from the classic idea of sensitivity, as represented by the perturbation cj' shown in the linear programming formulation: when cj' is set equal to cj plus or minus a fraction of cj, that fraction represents a percentage variation from each original coefficient. tau is a conservative estimate of the coefficient variation that can occur while still maintaining the original optimal basis. Such sensitivity analysis essentially answers the question of how one decision maker's decision remains unchanged irrespective of the pressure exerted by other decision makers. This gives us a vehicle for approximating the group utility function.
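The sketch below is not Wendell's closed-form result (the paper's equation (11) is not reproduced); it is only a direct illustration of what the tolerance means for a simple "pick the m best alternatives" stand-in model: the largest fraction by which every utility may simultaneously and independently vary, in the worst case, without changing the optimal selection. All data here are hypothetical.

def selection(utilities, m):
    """Indices of the m highest-utility alternatives."""
    return set(sorted(range(len(utilities)), key=lambda i: -utilities[i])[:m])

def still_optimal(utilities, m, tau):
    """Worst case for the incumbent: its members lose tau, the rest gain tau."""
    chosen = selection(utilities, m)
    perturbed = [u * (1 - tau) if i in chosen else u * (1 + tau)
                 for i, u in enumerate(utilities)]
    return selection(perturbed, m) == chosen

def tolerance(utilities, m, iters=60):
    lo, hi = 0.0, 1.0
    for _ in range(iters):          # bisect on the largest safe tau
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if still_optimal(utilities, m, mid) else (lo, mid)
    return lo

# Hypothetical utilities for five alternatives competing for two slots.
print(round(tolerance([0.61, 0.58, 0.44, 0.40, 0.57], m=2), 4))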
VI. Characterizing a Group Utility Function

A substantial weakness in the tolerance approach to sensitivity analysis appears when there are alternative optimal solutions to the multicriteria mathematical program. Alternate optimal solutions are indicated when at least one reduced cost of a nonbasic variable equals zero. When this occurs, the tolerance tau equals 0 (Wendell, 1985); this means that whenever any objective function coefficient changes, we cannot be assured of the same optimal solution. Since this happens often in binary programs, Wendell's methodology offers no way to work around the situation. As will be shown in the case study below, familiarity with the problem being modeled allows us to reformulate the model to obviate this problem.

Now we can formulate the "perturbed" linear program as described previously. In this formulation, cj is the vector of objective function coefficients, and K is the set of nonbasic variables. When one compares the relevant linear program objective function to the objective function that corresponds to the group's preferences, we have (12). To be assured that the group's optimum decision is the same as that of the individual decision maker, the coefficients of the objective function corresponding to the individual decision maker must satisfy the inequality (13).

To show how a group utility function can be characterized by the tolerance calculated above, we assume that the group being modeled has two decision makers: decision maker one with a localized perspective, and decision maker two with a more global perspective. We will now compare the utilities obtained for the same set of alternatives using the two different preference structures. Note that the rank ordering of the alternatives is generally not the same for the two decision makers. If we assume the group utility function takes an additive form such as (14), we can specify, using the tolerance, limits to the weighting factors that allow the optimal solution to remain unchanged. Let us say the greatest percentage difference in utilities between the decision makers occurs with alternative k. To calculate the critical weights, where a change in the optimal solution can take place, we solve the set of equations (15). Solving these equations for the two unknowns w1 and w2, we find that w1 must be greater than or equal to a particular threshold value. This means that we are guaranteed the same optimum solution only if decision maker one's preferences are weighted at least at this threshold. Figure 2 ("Group-utility-function characterization") graphically shows this result. The group utility function isovalue curve is a straight line, since we have assumed an additive group utility function for this case. When w1 is greater than or equal to the threshold, the point of tangency with the highest attainable group utility function isovalue lies along the efficient frontier defined for decision maker one. When w1 is varied below the threshold (and the corresponding slope of the group utility function isovalue lines decreases), the point of tangency is on the undefined efficient frontier of either the proxy decision maker or the group. Remember that this result is obtained independent of any utilities associated with the proxy decision maker.

When the preferences of the proxy decision maker are specified, the range of group-utility-function weights that allows decision maker one's preferences to prevail is considerably wider. Figure 3 ("Group-utility-function characterization with proxy decision-maker efficient frontier") graphically shows this result. Note that the optimum group utility function isovalue always falls tangent to either decision maker one's or the proxy decision maker's efficient frontier. A group efficient frontier does not become apparent in this case.

Now we can make some statements concerning the group utility function. Given decision maker one's utilities, we can specify a range of weighting factors over which we can guarantee that decision maker one's preferences will prevail in optimizing his/her decision. The tolerances and weighting ranges may at first appear extremely conservative. When the proxy decision maker's preferences are defined, the group's optimum solution differs from decision maker one's optimum solution only if w1 is less than or equal to (1 - beta) in the group utility function. Further, when one alternative is replaced by another, the optimum solution remains the same regardless of the weighting factors! However, the tolerance approach does not consider the individual utilities associated with other decision makers; it assesses a "worst case scenario." When we replace the current alternative with one that is not in the worst-case scenario, the weights used in the group utility function become immaterial, because both decision makers agree on the rank ordering of the alternatives. Thus, we see that if the changes in the utilities of an alternative (the coefficients of the linear program's objective function) remain within our tolerance, we are guaranteed the same optimal solution for the group's decision and for the original individual's decision.
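Since the paper's equation (15) is not reproduced above, the following hedged sketch shows one simple way to see a "critical weight" for an additive group utility GUF = w1*U1 + (1-w1)*U2: the w1 at which two alternatives that the decision makers rank oppositely become equally attractive to the group. Below that w1, decision maker one's preferred alternative can be displaced. The numbers are hypothetical; the paper's tolerance-based calculation is more conservative than this direct comparison.

def critical_w1(u1_a, u2_a, u1_b, u2_b):
    """w1 at which alternatives a and b tie under w1*U1 + (1-w1)*U2.
    Assumes DM1 prefers a (u1_a > u1_b) while DM2 prefers b (u2_b > u2_a)."""
    return (u2_b - u2_a) / ((u1_a - u1_b) + (u2_b - u2_a))

# Hypothetical utilities: DM1 slightly prefers a, DM2 clearly prefers b.
w_star = critical_w1(u1_a=0.62, u2_a=0.55, u1_b=0.60, u2_b=0.70)
print(round(w_star, 3))  # DM1's choice prevails only when w1 exceeds this value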
VII. CASE STUDY

Rather than confining our research to purely theoretical interests, we chose to apply the theory to a model that could be used as a real decision analysis tool. We concentrated on developing a model for evaluating satellite systems. Two decision makers are involved: the first is within the intelligence community, while the other is involved in launch scheduling and prioritization. Analysts at the national level often have difficulty comparing satellites and making prioritization decisions. An MAUT model allows them to perform such cross-comparisons using the common utile scale of the multi-attribute utility function. On the other hand, prioritizing space launches continues to be a major concern in the current environment of limited resources. Launch schedulers face the dilemma of scheduling resources that cannot accommodate all of the space community's launch needs. An MAUT model quantifies the utilities of the proposed satellites to be launched, which feed into a multi-criteria optimization model that maximizes the total utility of the satellites launched using constrained launch resources. In this case, the national perspective is likely to differ from the launch perspective: the former has a global view while the latter has more of a local viewpoint.

Four criteria, which are in turn decomposed into subcriteria, were chosen. These criteria and subcriteria are necessary and sufficient to describe the decision maker's entire process of considering the utility of a space system. This two-tiered hierarchy is shown in Figure 4 ("Model Criteria"). The "environment" criterion defines the value associated with the time-dependent "state of the world." To provide consistent value ratings for the satellites, a "snapshot" is taken and scored. The "mission impact" criterion attempts to determine the value of a satellite's missions relative to the missions of other satellites. As such, the rating given may reflect an entire class of satellites -- for example, those used for early warning. The "cost/domestic commitment" criterion reflects the value placed on the satellite by the nation to support the continuation of its mission. Finally, the "satellite status" criterion takes into account the individual characteristics of the satellite.

Using these criteria and subcriteria, a multi-attribute utility function was calibrated for both the launch decision maker and the national decision maker, labeled as decision makers one and two, respectively. The multi-attribute utility functions yield utilities and a prioritization among satellites. Specifically, the rank ordering as predicted by each decision maker was compared with the rank ordering computed by the model. Overall, the results were excellent in both cases; no reversals between the expected rankings and the model rankings occurred.

At this point, we turned our research towards seeing whether our utility function could be expressed in a simpler form. We were interested in whether an additive function, rather than a multiplicative one, could still yield equivalent results. In particular, we were looking for strategic equivalence. Strategic equivalence applies to the ordinal ranking of alternatives: if two utility functions produce the same rank ordering for a set of alternatives, they are said to be strategically equivalent. Rather than readminister the survey instrument to the decision maker, we used the data collected in the original survey. Recall that in the additive form the weights of the criteria sum to one, whereas in the multiplicative form they do not. In reusing the original survey, we assume that the relative weights between criteria reported by the decision maker are unchanged when we add the constraint that they sum to one. This is a reasonable assumption, since the decision maker was not made aware in the original survey of how the weights would be summed.
Once this assumption is made, calculating the weights for the additive function is simple. The following formula is used, shown here for the three-criterion case: (16). Here w1 is the weight of the most important criterion, as indicated by the decision maker, and w2/w1 and w3/w1 are the relative weights reported by the decision maker.
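Since equation (16) is not reproduced above, the following minimal sketch shows the calculation implied by the surrounding text: keep the elicited ratios w_i/w_1 and impose the constraint that the additive weights sum to one, so w_1 = 1 / (1 + w_2/w_1 + w_3/w_1 + ...). The ratios in the example are hypothetical.

def additive_weights(ratios_to_top):
    """ratios_to_top: list of w_i / w_1 for every criterion other than the top one."""
    w1 = 1.0 / (1.0 + sum(ratios_to_top))
    return [w1] + [w1 * r for r in ratios_to_top]

weights = additive_weights([0.8, 0.5, 0.2])   # hypothetical relative weights
print([round(w, 3) for w in weights], "sum =", round(sum(weights), 3))  # sum = 1.0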
With the weights calculated, the satellite data we collected earlier were put into the additive model, and a comparison of the rank ordering achieved under each utility function form was made. The additive model uses the additive form for both the criterion utility functions and the overall utility function in the hierarchy. It was found that the rank orderings were different: the additive form is not strategically equivalent to the multiplicative form. However, we found that there can be strategic equivalence under certain limited circumstances, at least for the overall utility function. We examined the case where the environment and mission impact criteria are held constant while the other two criteria are varied. This corresponds to the situation where a group of satellites holding the same environmental scores and mission impact scores are rank ordered. Given these limitations, we found that strategic equivalence holds between the multiplicative and additive forms of the utility function. This was true for all values of the varied criteria, and for all values tested for the criteria held constant. Figure 5 ("Strategic Equivalence") graphically shows strategic equivalence for the case where the environment and mission impact criteria scores are both 0.5. Note that the equivalent utility isovalue lines for the additive and multiplicative functional forms do not cross each other within the range of the varied criteria; in this particular case, they are very nearly parallel. Strategic equivalence was indicated in this limited case for both decision makers. Hence, while strategic equivalence does not hold in general, we find that in at least one practical application the additive form may be used to achieve the same ordinal rankings as the original function.

Now let us focus our attention on the multi-criteria optimization model representing the scheduling of satellite launches. The problem can be formulated as a maximum-flow network model, in which the maximum flow through the network is determined by the most constrained point in the scheduling process. Recall that two satellites can be processed simultaneously through the solid motor assembly building/solid motor assembly and readiness facility and through the launch pads. Either of these two points can be modeled as the critical point in the processing flow; we will use the launch pads in our adjusted model. If we confine our interest to only whether a satellite is launched, discarding the scheduling portion of the problem, the model can be simplified. Since two satellites can be simultaneously processed on the pads, satellites Kc and Kd are not constrained, as they are each the only satellite to be launched within their respective launch windows. However, three satellites, Ka, Kb, and Ke, share the same launch window and are constrained by the pad capacities. This reduced model is shown in (17). Here the decision variables A through E represent launch go or no-go decisions (1 or 0 in value, respectively), and the coefficients of the objective function represent the utilities of satellites Ka through Ke. This simplified model yields the same optimal objective function value as the original network flow model.

However, the solution is now unique, and we can apply the tolerance approach. Here the perturbed linear program has cj = (0.452, 0.644, 0.377, 0.386, 0.450, 0, 0, 0, 0, 0, 0, 0, 0) and K = {E, S3, S4, S5, S7}. The zero values in cj are the coefficients of the slack variables, which are added during the problem solution; likewise, the Si are the slack variables that are not in the optimum basis. Further, we find that tau = 0.0022. That is, each coefficient in the objective function (the satellite utilities) can simultaneously and independently vary by up to 0.22% without changing the optimum solution.

In applying tolerances to characterize a group utility function, we refer to the launch operators as decision maker one and the national prioritization authority as decision maker two. The following utilities were obtained for the same set of satellites using the two different preference structures:

Satellite   Launch utility   National utility   Launch rank   National rank
Ka          0.452            0.669              2             3
Kb          0.644            0.740              1             1
Kc          0.377            0.623              5             4
Kd          0.386            0.610              4             5
Ke          0.450            0.673              3             2

The greatest percentage difference in satellite utilities between the decision makers occurs with satellite Kc. To calculate the critical weights (where a change in the optimum solution can take place), we solve the set of simultaneous equations with alpha = 0.377. Solving these equations, we find that w1 must be at least 0.9967. Given the launch decision maker's utilities, we can thus specify a range of weighting factors over which we can guarantee that the launch decision maker's preferences will prevail in optimizing the launch schedule. In this particular case, the range is exceptionally tight. This is because two of the satellites competing for the same launch window, Ka and Ke, have utilities that differ by only 0.002. If we rerun the model with satellite Ke replaced by Kd, we calculate a tolerance of 5.12% and a corresponding range of w1 of at least 0.92.

When the proxy decision maker's preferences are defined, the group's optimum solution differs from the launch decision maker's optimum solution only if w1 is less than or equal to beta in the group utility function. Further, when satellite Ke is replaced by Kd, the optimum solution remains the same regardless of the weighting factors! The tolerance approach assesses a worst-case scenario, which in this case is one where the utility of satellite Ke (originally 0.450) rises 0.22% and the utility of satellite Ka (originally 0.452) falls 0.22%. When we use Kd in place of Ke, the weights used in the group utility function become immaterial, because the decision makers agree on the rank ordering of the satellites.
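As a small illustration using the case-study table above, the sketch below computes the additive group utility w1*U_launch + (1-w1)*U_national for the three satellites competing for one launch window (Ka, Kb, Ke) and shows which pair the group would launch onto the two pads as w1 varies. This is a direct check only, not the paper's tolerance-based derivation of the critical weight, which is deliberately more conservative.

launch   = {"Ka": 0.452, "Kb": 0.644, "Ke": 0.450}
national = {"Ka": 0.669, "Kb": 0.740, "Ke": 0.673}

def group_pick(w1, slots=2):
    group = {s: w1 * launch[s] + (1 - w1) * national[s] for s in launch}
    return sorted(sorted(group, key=group.get, reverse=True)[:slots])

for w1 in (0.0, 0.5, 0.6, 0.7, 0.9, 1.0):
    print(w1, group_pick(w1))
# The launch decision maker's pick {Ka, Kb} displaces the national pick {Kb, Ke}
# once w1 is large enough; the tolerance analysis in the text gives the far more
# conservative worst-case threshold of w1 >= 0.9967.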
VIII. CONCLUSION

Overall, this research was quite successful. Our prime theoretical concern was whether we could substantially simplify the MAUT survey process. Usually survey instruments are so complex that the decision maker becomes confused to the extent that his responses are inconsistent, and he often leaves the process with little confidence in its validity. We made three major simplifications to the survey instrument, primarily centered on eliminating the need for lottery questions to capture the decision maker's preference structure. Kirkwood's assumption of exponential univariate utility curves provided the first simplification. This eliminated the need for the fractile method, which has been criticized for its lack of consistent results (de Neufville, 1990). Next, we reduced the number of pairwise comparisons required to verify utility independence by taking advantage of Keeney and Raiffa's "weak conditions." As they point out, without these conditions we must examine 2^n - 2 utility independence assumptions for n attributes (Keeney and Raiffa, 1976); for our model, this is 8,190 verifications! These conditions reduce the number of verifications to twelve, a far more manageable undertaking. To determine preferences between criteria without using lottery questions, we used a combination of the methodologies of Seo and Sakawa and of Clemen. As a result of applying Seo and Sakawa's theory, the decision maker needed only to indicate the most important criterion and then express the weights of the other criteria as ratios to it. Clemen provided the means to determine the weight assigned to the most important criterion via the swing weight method. Using this weight and the weight ratios between the criteria, the remaining weights could be derived.

The MAUT model constructed was fairly complex, having thirteen criteria. Taking advantage of a hierarchical organization reduced the number of required pairwise comparisons from 78 to 21. This organization allowed a logical grouping of criteria and eliminated the difficult task of comparing criteria that were greatly dissimilar. Our survey simplifications allowed the entire survey to be administered in an average time of less than two hours, without the use of frustrating lottery questions. In the case study, the decision maker left the survey with a good understanding of the process and was confident in the outcome of the model.
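As a quick arithmetic check of the two headline counts above (the reductions to twelve verifications and to 21 comparisons depend on the specific hierarchy and are not re-derived here), with n = 13 bottom-level attributes:

2^{n} - 2 = 2^{13} - 2 = 8192 - 2 = 8190 \quad \text{independence assumptions without the weak conditions,}
\binom{13}{2} = \frac{13 \cdot 12}{2} = 78 \quad \text{pairwise comparisons without the hierarchical grouping.}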
We determined that, in general, an additive utility functional form could not be substituted for the multiplicative form. However, the additive form could be used in at least one limited case: when rank ordering satellites with the same environment performing the same mission.

We also demonstrated that the utilities calculated for a set of satellites can be used to optimize the use of individual satellites. The multi-criteria optimization model showed that a set of constraints can preclude a satellite with a high utility from being selected for launch, while selecting a satellite with a relatively low utility. Most importantly, using the results of the optimization problem, we were able to make some observations about how the use of a group utility function might influence the decision being made. Using the tolerance approach, we were able to state the conditions under which the launch decision maker's decision would be guaranteed to prevail in a group environment. More critically, in the two-decision-maker case we examined, we were able to state these conditions without determining the efficient frontier of the second decision maker, and without specifying the form of the group utility function.

There is a great deal of potential for future research in group utility functions. First, the case of three or more decision makers should be examined. Future research should also attempt to model the group utility function using the multiplicative form, since the additive form rarely applies to real decision problems. Historical launch decisions should then be used to validate the improved model. While the exact form of a group utility function is difficult to determine explicitly, these theoretical advances might be used to approximate the launch decision maker's preferences. The improved model can then be used as a tool to assist in making future launch decisions.

REFERENCES

Clemen, R. T. (1991). Making Hard Decisions. Belmont, CA: Duxbury Press.

de Neufville, R. (1990). Applied Systems Analysis: Engineering Planning and Technology Management. McGraw-Hill.

Keeney, R. L.; Raiffa, H. (1976). Decisions with Multiple Objectives: Preferences and Value Tradeoffs. John Wiley & Sons.

Kirkwood, C. W. (1994). Structured Decision Making. Arizona State University, Tempe, Arizona.

Marvin, F.; Hutchinson, L. MAU or AHP: Which is Better? Unpublished paper, Analytic Sciences Corp., Reston, Virginia.

Seo, F.; Sakawa, M. (1988). Multiple Criteria Decision Analysis in Regional Planning. Reidel Publishing Company.

Wendell, R. E. (1985). "The Tolerance Approach to Sensitivity Analysis in Linear Programming." Management Science, Vol. 31, No. 5, pp. 564-578.

Tutor Answer

Fridah G
School: UIUC

Attached.

Last name 1
Name
Course
Instructor
Date
Multiattribute Utility Theory
Staats and Chan make use of the multi-attribute utility theory (MAUT) in order to define
a mathematical representation of a decision maker’s utility. The authors were concerned with
whether or not it is possible to substantially simplify the survey process of MAUT. This is
because survey instruments are usually so complex that decision makers get confused to the point
that their responses become inconsistent. As a result, such decision makers often leave the
process with little confidence in its validity. Staats and Chan simplify the survey instrument in
three major ways, all centered primarily on eliminating the need to use lottery questions to
capture the decision makers’ preference structure.
The first simplification involves the use of Kirkwood's assumption of exponential univariate
utility curves. This eliminates the need for the fractile method, which has been criticized for
its failure to provide consistent results (de Neufville, 1990). The authors then reduce the
number of pairwise comparisons needed to verify utility independence by taking advantage of the
"weak conditions” of Keeney and Raiffa.
According to Kee...

Review

Anonymous
Awesome! Exactly what I wanted.
