Transcripts

MGS 3100 Business Analysis: Decision Analysis (Nov 1, 2010)

Agenda: Decision Analysis, Problems

Decision Analysis

Open MGS3100_06Decision_Making.xls

Decision Alternatives: your options, the factors you have control over. A set of alternative actions; we may choose whichever we please.

States of Nature: possible outcomes, not influenced by the decision. Probabilities are assigned to each state of nature.

Certainty: only one possible state of nature. The Decision Maker (DM) knows with certainty what the state of nature will be.

Ignorance: several possible states of nature. The DM knows all possible states of nature, but does not know the probability of occurrence of each.

Risk: several possible states of nature, with an estimate of the probability of each. The DM knows all possible states of nature and can assign a probability of occurrence to each state.

Decision Making Under Ignorance

LaPlace-Bayes: assume all states of nature are equally likely to occur, and select the alternative with the best average payoff.

Maximax: evaluate each alternative by the maximum possible payoff associated with it; the alternative that yields the maximum of these maximum payoffs (maximax) is then chosen.

Maximin: evaluate each alternative by the minimum possible payoff associated with it; the alternative that yields the maximum of these minimum payoffs (maximin) is chosen.

Minimax Regret: the decision is based on minimizing the regret of having made that choice; select the alternative that minimizes the maximum regret.
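The four criteria above can be sketched in a few lines of code. This is a minimal illustration using an invented 3x3 payoff table (rows are decision alternatives D1-D3, columns are states of nature); the numbers are assumptions for the sketch, not from the lecture.

```python
# Illustrative payoff table: payoffs[d][j] = payoff of decision d in state j.
payoffs = {
    "D1": [100, 60, -20],
    "D2": [80, 70, 10],
    "D3": [50, 50, 50],
}
n_states = 3

# LaPlace-Bayes: all states equally likely; pick the best average payoff.
laplace = max(payoffs, key=lambda d: sum(payoffs[d]) / n_states)

# Maximax (optimistic): pick the alternative with the largest best-case payoff.
maximax = max(payoffs, key=lambda d: max(payoffs[d]))

# Maximin (pessimistic): pick the alternative with the largest worst-case payoff.
maximin = max(payoffs, key=lambda d: min(payoffs[d]))

# Minimax regret: regret = (best payoff for that state) - (this decision's
# payoff); pick the alternative whose maximum regret is smallest.
best_per_state = [max(payoffs[d][j] for d in payoffs) for j in range(n_states)]
max_regret = {d: max(best_per_state[j] - payoffs[d][j] for j in range(n_states))
              for d in payoffs}
minimax_regret = min(max_regret, key=max_regret.get)
```

Note that the four criteria can disagree: with this table, maximax picks D1 (best case 100), maximin picks D3 (guaranteed 50), while LaPlace-Bayes and minimax regret both pick D2.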

LaPlace-Bayes

Maximax

Maximin

MinMax Regret Table. Regret table: regret = (best payoff for that state of nature) - (payoff for this decision).

Decision Making Under Risk

Expected Return (ER), also called Expected Value (EV) or Expected Monetary Value (EMV):

S_j: the j-th state of nature
D_i: the i-th decision alternative
P(S_j): the probability that S_j will occur
R_ij: the return if D_i is chosen and S_j occurs
ER_i: the long-run average return for D_i

ER_i = Sum_j [ R_ij x P(S_j) ]
Variance_i = Sum_j [ (ER_i - R_ij)^2 x P(S_j) ]

The EMV criterion picks the decision alternative with the highest EMV. We'll call this EMV the Expected Value Under Initial Information (EVUII) to distinguish it from what the EMV may become if we later get more information. Do not make the common student error of believing that the EMV is the payoff the decision maker will receive. The actual payoff will be the R_ij for the chosen alternative D_i and the state of nature S_j that actually occurs.

Decision Making Under Risk

One way to assess the risk associated with an alternative action is to compute the variance of its payoffs. Depending on your willingness to accept risk, an alternative action with only a moderate EMV and a small variance may be superior to a choice that has a large EMV but also a large variance. The variance of the payoffs for an alternative action is defined as

Variance_i = Sum_j [ (ER_i - R_ij)^2 x P(S_j) ]

Most of the time, we want to make the EMV as large as possible and the variance as small as possible. Unfortunately, the maximum-EMV alternative and the minimum-variance alternative are usually not the same, so in the end it comes down to an informed judgment call.
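The EMV and variance formulas can be sketched directly. This is a minimal illustration; the probabilities and payoffs below are invented for the sketch (chosen so that the higher-EMV alternative also has the higher variance, showing the trade-off described above), not taken from the lecture.

```python
# Illustrative state probabilities P(S_j) and payoff table R_ij.
probs = [0.3, 0.5, 0.2]
payoffs = {
    "D1": [150, 60, -40],
    "D2": [80, 70, 10],
}

def emv(returns, probs):
    # ER_i = Sum_j R_ij * P(S_j)
    return sum(r * p for r, p in zip(returns, probs))

def variance(returns, probs):
    # Var_i = Sum_j (ER_i - R_ij)^2 * P(S_j)
    er = emv(returns, probs)
    return sum((er - r) ** 2 * p for r, p in zip(returns, probs))

results = {d: (emv(r, probs), variance(r, probs)) for d, r in payoffs.items()}
```

Here D1 has the larger EMV (67 vs. 61) but also a far larger variance, so a risk-averse decision maker might still prefer D2.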

Expected Return

Expected Value of Perfect Information

EVPI measures how much better you could do on this decision if you could always know which state of nature would occur. The Expected Value of Perfect Information (EVPI) gives an absolute upper limit on the value of additional information (ignoring the value of reduced risk). It measures the amount by which you could improve on your best EMV if you had perfect information. It is the difference between the Expected Value Under Perfect Information (EVUPI) and the EMV of the best action (EVUII). The EVPI measures how much better you could do on this decision, averaging over repeating the decision situation many times, if you could always learn which state of nature would occur just in time to make the best decision for that state. Remember that it does not imply control of the states of nature, just perfect prediction. Remember also that it is a long-run average. It puts an upper limit on the value of additional information.

Expected Value of Perfect Information

EVUPI: Expected Value Under Perfect Information = Sum_j [ P(S_j) x max_i(R_ij) ]
EVUII: EMV of the best action = max_i(EMV_i)
EVPI = EVUPI - EVUII
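A short sketch of the EVPI computation, using an invented two-alternative payoff table (the probabilities and payoffs are illustrative assumptions, not from the lecture):

```python
# Illustrative state probabilities and payoff table.
probs = [0.3, 0.5, 0.2]
payoffs = {"D1": [150, 60, -40], "D2": [80, 70, 10]}

# EVUII: EMV of the best action under initial information.
emvs = {d: sum(r * p for r, p in zip(rs, probs)) for d, rs in payoffs.items()}
evuii = max(emvs.values())

# EVUPI: for each state, take the best achievable payoff, weighted by
# that state's probability (perfect prediction, not control).
evupi = sum(p * max(payoffs[d][j] for d in payoffs)
            for j, p in enumerate(probs))

evpi = evupi - evuii
```

With these numbers EVUII = 67 and EVUPI = 82, so EVPI = 15: no source of additional information about the states of nature is worth more than 15.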

Expected Value of Perfect Information

Expected Value of Sample Information

EVSI = expected value of sample information
EVwSI = expected value with sample information
EVwoSI = expected value without sample information
EVSI = EVwSI - EVwoSI
Efficiency Index = (EVSI / EVPI) x 100

Agenda: Problems, Decision Analysis

What kinds of problems? Decision alternatives are known. States of nature and their probabilities are known. Payoffs are computable under the different possible scenarios.

Basic Terms: Decision Alternatives; States of Nature (e.g., the state of the economy); Payoffs (the dollar outcome of a decision assuming a given state of nature); Criteria (e.g., Expected Value)

Example Problem 1 - Expected Value & Decision Tree

Expected Value

Decision Tree

Example Problem 2 - Sequential Decisions

Would you hire a consultant (or a psychic) to get more information about the states of nature? How would the additional information cause you to revise your probabilities of the states of nature occurring? Draw a new tree depicting the complete problem. Consultant's Track Record.

Example Problem 2 - Sequential Decisions (Ans)

Open MGS3100_06Joint_Probabilities_Table.xls

The first thing you need to do is get the data (the track record) from the consultant in order to make a decision. This track record can be converted to look like this:

P(F|S1) = 0.2, P(U|S1) = 0.8
P(F|S2) = 0.6, P(U|S2) = 0.4
P(F|S3) = 0.7, P(U|S3) = 0.3
F = Favorable, U = Unfavorable

Next, you take this data and apply the prior probabilities to get the joint probability table (Bayes' Theorem).

Example Problem 2 - Sequential Decisions (Ans)

Open MGS3100_06Joint_Probabilities_Table.xls

The next step is to compute the posterior probabilities (you will need them to calculate your expected values):

P(S1|F) = 0.06/0.49 = 0.122, P(S2|F) = 0.36/0.49 = 0.735, P(S3|F) = 0.07/0.49 = 0.143
P(S1|U) = 0.24/0.51 = 0.471, P(S2|U) = 0.24/0.51 = 0.471, P(S3|U) = 0.03/0.51 = 0.059

Solve the decision tree using the posterior probabilities just computed.
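The joint and posterior probabilities in this example can be reproduced with Bayes' theorem. One assumption: the priors P(S1) = 0.3, P(S2) = 0.6, P(S3) = 0.1 are not stated on the slide; they are inferred from the joint probabilities it quotes (e.g., 0.3 x 0.2 = 0.06).

```python
# Priors P(S) -- inferred from the slide's joint probabilities, not stated there.
priors = {"S1": 0.3, "S2": 0.6, "S3": 0.1}
# Consultant's track record: P(F|S); P(U|S) = 1 - P(F|S).
p_f_given_s = {"S1": 0.2, "S2": 0.6, "S3": 0.7}

# Joint probabilities P(S and F), P(S and U).
joint_f = {s: priors[s] * p_f_given_s[s] for s in priors}
joint_u = {s: priors[s] * (1 - p_f_given_s[s]) for s in priors}

# Marginals: probability of a Favorable / Unfavorable report.
p_f = sum(joint_f.values())   # 0.49
p_u = sum(joint_u.values())   # 0.51

# Posteriors P(S|F) and P(S|U), used at the chance nodes of the new tree.
post_f = {s: joint_f[s] / p_f for s in priors}
post_u = {s: joint_u[s] / p_u for s in priors}
```

These match the figures above: for example, post_f["S2"] = 0.36/0.49 = 0.735 and post_u["S3"] = 0.03/0.51 = 0.059.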