Contrasting Fire and Flooding Hazards

Triggering Event
  Fire: Ignition
  Flooding: Impact

Causal Factors
  Fire: Very large number of possible causes
  Flooding: Relatively few causes: grounding, collision, structural failure

Design Measures
  Fire:
    • Better/safer materials and insulation
    • Safer equipment
    • Better subdivision
    • Better fire-fighting equipment
    • Spatial distribution of combustibles and ignition sources
    • Circumventing individual causal factors (this is a huge design space)
  Flooding:
    • Better subdivision and ballasting
    • Better structural strength
    • Better navigation systems

Measures Enforceable at Concept Design Stage
  Fire: Some of the above measures
  Flooding: Most of the above measures
Risk Modelling Method for Fire Risk Would Differ from that for Flooding
• The causes and preventive (design) measures for fire hazards are far more diverse than those for flooding.
• The information model necessary to reason about fire safety is much bigger (far more than just the geometry and connectivity of spaces).
This difference entails that fire risk should be modelled quite differently.

Risk Model for Fire Safety
A grand aggregate or single summary statistic is less useful for fire risk. Why?
• To avoid the compensation effect. Intuitively: three good aspects or cases should not be allowed to compensate for one very bad case. Severity and frequency are not interchangeable in the high-stake regime.
• The normative decision theory of human rationality (the one that entails a total order of choices in terms of risk = probability × severity) holds good in the low- and medium-stake regimes, but not in high-stake regimes. Example: what would any normal person choose between?
(A) receiving a fortune of 3 billion pounds with a probability of 100%
(B) receiving a fortune of 4 billion pounds with a probability of 80%
Life safety is a similar high-stake proposition for which individuals and society would (given an explicit choice) not act in accordance with normative decision theory. And that is not irrational once factors such as regret, reputation and incrimination are taken into account.
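The two lotteries (A) and (B) above make the point concrete: normative decision theory ranks by expected value alone, which favours (B), yet almost everyone chooses the certain (A). A minimal sketch of that comparison:

```python
# Expected-value comparison of the two lotteries in the text.
ev_a = 1.00 * 3_000_000_000   # (A): 3 billion pounds with certainty
ev_b = 0.80 * 4_000_000_000   # (B): 4 billion pounds with probability 0.8

# Normative theory (maximize probability * payoff) prefers (B),
# yet in this high-stake regime people overwhelmingly choose (A).
assert ev_b > ev_a
print(f"EV(A) = {ev_a:.2e}, EV(B) = {ev_b:.2e}")
```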
Fire Risk Model
Risk calculation is normally done using event trees. The following are possible choices of root edges for fire-risk event trees:
CAUSAL DISTRIBUTION: the historical frequency distribution of fire incidents over the causes.
SPATIAL DISTRIBUTION: the historical frequency distribution of fire incidents over ship spaces or over ship-space types.
CONFIGURATIONAL DISTRIBUTION: the historical frequency distribution of fire incidents over configurational features (such as the presence of a certain machinery item, or the placement of certain machinery combinations in the same enclosure).
Why is the causal distribution important? Because it directly enables the designer to decide on preventive (design) measures. The event sub-tree of each causal factor gives the risk attributed to that cause, and such per-cause risk figures help prioritize the corresponding design tasks.
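The prioritization step can be sketched as follows. All frequencies and conditional losses below are invented placeholders, not real casualty data; the point is only the mechanics of attributing and ranking risk by cause.

```python
# Sketch: prioritizing design tasks by per-cause risk contribution.
causal_freq = {          # fire incidents per ship-year, by causal factor
    "electrical fault": 0.012,
    "hot work (welding)": 0.007,
    "galley grease": 0.009,
    "smoking": 0.004,
}
expected_loss = {        # conditional expected loss, given that cause
    "electrical fault": 1.5,
    "hot work (welding)": 4.0,
    "galley grease": 0.8,
    "smoking": 2.5,
}

risk_by_cause = {c: causal_freq[c] * expected_loss[c] for c in causal_freq}
total_risk = sum(risk_by_cause.values())

# Design tasks ranked by attributed risk, highest first.
for cause, r in sorted(risk_by_cause.items(), key=lambda kv: -kv[1]):
    print(f"{cause:20s} {r:.4f} ({100 * r / total_risk:.0f}% of total)")
```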
How does the designer use the causal distribution of risk?
E.g. design of under-ventilated, well-insulated, well-monitored holds for flammable substances.
E.g. design of properly fused and sufficiently numerous electrical plug points in crew cabins, to rule out the need for jerry-rigging and overloading.
E.g. keeping combustible material away from all places where welding might be carried out.
E.g. detection and correction of fuel-vapour leakages.
E.g. avoiding grease build-up in the galley grills.
E.g. prohibiting smoking except in designated smoking areas, enforced using sensitive and location-aware smoke alarms.
But how can such measures be fitted into a holistic quantitative risk model?
Causal Factors for Ship Fires
There are literally hundreds of possible causes of ignition.
Causally Rooted Event Trees
[Figure: two example event trees, one rooted at "Cigarette ignites linen" and one at "Welding spark ignites oily rag". From each root the branches pass through the questions "Extinguished by occupant?", "Detected by detector?" and "Controlled by sprinkler?"; the favourable answers lead to leaves marked "Little loss", the unfavourable ones to "Fire reaches flashover".]
Each root is a causal factor. The leaves carry a loss measure. Each branch has an associated probability. The risk is calculated by summing branch products.
The causal distribution of risk defines design tasks and helps prioritize them.

Spatial Distribution of Fire Risk
The spatial distribution of fire events is also very useful:
• The spatial distribution is more directly usable for Monte Carlo simulation of fire risk.
• A spatial distribution (a shipboard map) of fire incidence and severity allows the designer to decide on the distribution of safety provisions on the ship.
• A spatial map of fire risk levels helps the designer to decide on the placement of facilities, subsystems and risk-control provisions.
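The branch-product calculation over a causally rooted event tree can be sketched as a recursive walk: multiply probabilities down each branch and sum the leaf losses. The tree below is a toy encoding of the cigarette scenario; the probabilities, loss values and ignition frequency are all invented for illustration.

```python
# Toy causally rooted event tree: each internal node is a question with
# (probability, subtree) branches; each leaf carries a loss measure.
tree = ("Extinguished by occupant?", [
    (0.6, 10),                                   # yes: little loss
    (0.4, ("Detected by detector?", [
        (0.9, ("Controlled by sprinkler?", [
            (0.8, 10),                           # yes: little loss
            (0.2, 1000),                         # no: fire reaches flashover
        ])),
        (0.1, 1000),                             # undetected: flashover
    ])),
])

def expected_loss(node):
    """Sum of (product of branch probabilities) * leaf loss."""
    if isinstance(node, (int, float)):
        return node
    _question, branches = node
    return sum(p * expected_loss(sub) for p, sub in branches)

ignition_freq = 0.002                            # cigarette ignitions per year
print(ignition_freq * expected_loss(tree))       # risk attributed to this cause
```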
Configurational Distribution of Fire Risk
• It helps the designer identify certain combinations or configurations as more hazardous than others.
• It captures vital knowledge about the safety of design configurations.
• It serves to quantify and generalize the prescriptive rules about proximity and isolation of subsystems.
• The configuration space may be discrete and non-ordered, but it can still be optimized by combinatorial means.
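A minimal sketch of that last bullet: a brute-force combinatorial search over discrete, non-ordered placements. The subsystem names and the pairwise hazard scores for sharing an enclosure are hypothetical, chosen only to show the search.

```python
# Brute-force search for the least-hazardous assignment of subsystems
# to two enclosures. All pair-hazard scores are invented placeholders.
from itertools import product

subsystems = ["generator", "fuel unit", "pump", "store"]
pair_hazard = {                      # hazard of placing a pair together
    frozenset(["generator", "fuel unit"]): 5.0,
    frozenset(["generator", "store"]): 1.0,
    frozenset(["fuel unit", "store"]): 3.0,
}

def config_hazard(assignment):
    """Total hazard of an assignment: sum over co-located pairs."""
    score = 0.0
    for i, a in enumerate(subsystems):
        for b in subsystems[i + 1:]:
            if assignment[a] == assignment[b]:
                score += pair_hazard.get(frozenset([a, b]), 0.0)
    return score

# Enumerate every assignment of the four subsystems to enclosures {0, 1}.
best = min(
    (dict(zip(subsystems, placing)) for placing in product([0, 1], repeat=4)),
    key=config_hazard,
)
print(best, config_hazard(best))
```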
Advocacy: Risk Distribution over Risk Index for Fire Safety
• Fire-consequence calculation is not yet accurate enough to yield a reliable absolute estimate, but relative evaluations are quite reliable. Distributions (over aspects and extents) provide a more comparative picture.
• Averaging causes information loss and can mislead. If Mr. Average overeats and Mrs. Average is severely anorexic, then on average the Average couple might have a very balanced diet.
• How long before, faced with a conflict between fire safety and a commercial interest, the designer adjusts the average by enhancing the fire safety of another set of cases?

Fire Risk Analysis
Analysis, noun: separation of a whole into its component parts (etymology: New Latin, from Greek, from analyein "to break up", from ana- + lyein "to loosen") - Merriam-Webster's dictionary
• To be true to this meaning, risk analysis should involve breaking or distributing risk into its component parts, not combining possible parts into one single number.
• A refined risk distribution gives the designer greater insight than an aggregated numeral.

Combined Fire + Flooding Risk Index
• Under such a hypothetical regulatory framework (one that mandates a single combined index), a designer faced with a fire-safety problem might as well adjust the combined index by improving the flooding safety, or vice versa.
• Under automated optimization this would happen inadvertently and inevitably.
• "Rational" as this might be from the normative viewpoint, safety being a high-stake proposition, human society would not find it acceptable.
• Caveat: it can be shown that if a ship has a damage-stability vulnerability in the aft subdivision, its index A can still be improved by ignoring that fact and improving the forward subdivision. The compensation effect can potentially hurt.
Fire and Flooding Combined Risk-Analysis Framework
Key requirements:
1. Generation of "accident" cases according to an objectivist distribution of initial conditions.
2. Predictive capability for consequence estimation, given the initial distribution and the evolution-process parameters.
3. Archiving and classification of cases based on several aspects. Each classification of a large number of results gives a distribution of consequence over one or more features of the scenario.
4. Such a multi-taxonomy archive would be a gold mine for risk analysis in the design process.
5. Clustering over consequence features will also enrich the designer's knowledge of the design space with regard to safety.
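Requirements 1-3 can be sketched as a small Monte Carlo loop: sample accident cases from an assumed causal distribution, attach a crude stand-in consequence, archive them, and then classify the archive along one aspect. The cause frequencies, space types and exponential severity model are all assumptions made for the sketch, not part of any real framework.

```python
# Minimal sketch: sample "accident" cases, archive them, classify them.
import random

random.seed(0)
cause_freq = {"electrical": 0.5, "hot work": 0.2, "galley": 0.3}  # assumed
space_types = ["cabin", "engine room", "galley"]

archive = []
for case_id in range(10_000):
    cause = random.choices(list(cause_freq), weights=cause_freq.values())[0]
    space = random.choice(space_types)
    severity = random.expovariate(1.0)      # stand-in consequence model
    archive.append({"cause": cause, "space": space, "severity": severity})

# One classification of the archive: consequence distribution over causes.
by_cause = {}
for case in archive:
    by_cause.setdefault(case["cause"], []).append(case["severity"])
for cause, sev in by_cause.items():
    print(cause, len(sev), sum(sev) / len(sev))
```

The same archive could equally be classified over `space` or clustered over severity, which is exactly what makes the multi-taxonomy archive useful.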
A risk index model may also be used in addition to risk distributions.

Loss of Life as a Common Currency for Expressing Fire and Flooding Cost
• Risk is a chance of loss of life.
• The chance is measured by statistics.
The fatality-number frequency is aggregated over the hazards,

    fr_N(N) = Σ_{j=1}^{n_hz} fr_hz(hz_j) · pr_N(N | hz_j),

and the potential loss of life (the expected number of fatalities per year) is

    PLL ≡ Σ_{i=1}^{N_max} Σ_{j=i}^{N_max} fr_N(j).

[Figure: bar chart of loss of life (expected number of fatalities per year) over the loss scenarios flooding, fire, intact stability loss and other; flooding and fire together account for ~90% of the risk.]
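A numerical sketch of the two sums, fr_N(N) = Σ_j fr_hz(hz_j)·pr_N(N | hz_j) and the double sum for PLL, using an invented two-hazard model (all figures are placeholders):

```python
# Invented two-hazard frequency model for illustrating fr_N and PLL.
fr_hz = {"fire": 0.02, "flooding": 0.01}        # hazard frequencies per year
pr_N = {                                         # P(N fatalities | hazard)
    "fire":     {0: 0.90, 1: 0.07, 2: 0.03},
    "flooding": {0: 0.80, 1: 0.10, 2: 0.10},
}
N_max = 2

fr_N = {
    n: sum(fr_hz[hz] * pr_N[hz].get(n, 0.0) for hz in fr_hz)
    for n in range(N_max + 1)
}
pll = sum(fr_N[j] for i in range(1, N_max + 1) for j in range(i, N_max + 1))

# The double sum counts each fatality level j exactly j times, so PLL
# equals the expected number of fatalities, sum_N N * fr_N(N).
assert abs(pll - sum(n * f for n, f in fr_N.items())) < 1e-12
print(f"PLL = {pll:.4f} expected fatalities per year")
```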
Loss aversion - Wikipedia, the free encyclopedia
http://en.wikipedia.org/w/index.php?title=Loss_aversion&printable=yes
Loss aversion From Wikipedia, the free encyclopedia
In prospect theory, loss aversion refers to people's tendency to strongly prefer avoiding losses to acquiring gains. Some studies suggest that losses are as much as twice as psychologically powerful as gains. Loss aversion was first convincingly demonstrated by Amos Tversky and Daniel Kahneman. Because people prefer avoiding losses to making gains, loss aversion leads to risk aversion when people evaluate a possible gain; this explains the curvilinear shape of the prospect-theory utility graph in the positive domain. Conversely, people strongly prefer risks that might possibly mitigate a loss (called risk-seeking behavior). Loss aversion may also explain sunk-cost effects. Note that whether a transaction is framed as a loss or as a gain is very important to this calculation: would you rather get a 5% discount, or avoid a 5% surcharge? The same change in price, framed differently, has a significant effect on consumer behavior. Though traditional economists consider this "endowment effect", and all other effects of loss aversion, to be completely irrational, loss aversion is central to the fields of marketing and behavioral finance.
Can loss aversion ever be rational?
There is an important critique of the view held by economists that this behaviour is irrational. The implicit assumption of conventional economics is that the only relevant metric is the magnitude of the absolute change in expenditure. In the above example, saving 5% is considered equivalent to avoiding paying 5% extra. This is not the only rational interpretation. Another view is that the most important metric is the magnitude of the relative change in the wealth of the decision-maker. Again referring to the above example, a 5% discount is then not equivalent to avoiding a 5% surcharge. The reasoning is as follows. Take a hypothetical item with a base cost of $1000, and consider two possible scenarios:
In the first scenario, the buyer expects to pay $1000, but is then offered a 5% discount. The price is then $950, and the change represents a 5% saving.
In the second scenario, there is a surcharge of 5%, or $50, so the buyer expects to pay $1050. Avoiding the surcharge would mean a price of $1000. Buyers see this as a saving of $50 on what they expected to pay ($1050), so the perceived saving is 50/1050 × 100% ≈ 4.76%.
When the saving relative to the remaining wealth (or stock of money) is different, the value of the transaction changes accordingly. Under this interpretation, decisions made by consumers are not necessarily irrational.
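The relative-change arithmetic in the two scenarios, made explicit:

```python
# Scenario 1: a 5% discount on a $1000 base price.
# Scenario 2: avoiding a 5% surcharge, anchored at an expected $1050.
base = 1000.0
saving = 0.05 * base                     # $50 in both scenarios

pct_1 = saving / base                    # 50/1000  = 5.00%
pct_2 = saving / (base + saving)         # 50/1050 ~= 4.76%
print(f"{pct_1:.2%} vs {pct_2:.2%}")
```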
Taking this to an extreme, if a person has only $1000, getting $1000 simply doubles their wealth (which would be desirable), but losing $1000 would wipe them out completely (which might be a matter of life and death). In this case, given the need for money for food and shelter in order to survive, the individual will be far more motivated to avoid losing $1000 than to try to gain $1000. In addition, it has been asserted that the effect of relative evaluation is more pronounced the greater the potential amount saved is relative to the total amount the decision-maker has to spend. All of the above effects can be expressed in terms of the utility function of money, and, in particular, not regarding money as a linear measure of utility. In other words, if money has diminishing marginal utility, then each dollar is worth less than the one before it. To use the example before, the first $1000 might be worth $1000 to a person, and the second $1000 worth only $950 (in terms of utility). This would not be "loss aversion" but just a phenomenon adequately explained by economic theory.
An alternative example
Imagine that your country is preparing for an outbreak of a disease which is expected to kill 600 people. Given the choice between two vaccination schedules, Program A, which will save 200 people, and Program B, which will save all 600 with probability 1/3 and no one with probability 2/3, most will choose Program A.
However, suppose the question is framed instead as a choice between Program C, which will allow 400 people to die, and Program D, under which no one will die with probability 1/3 and all 600 will die with probability 2/3. Most people will then choose option D.
This is an example of loss aversion: the two situations are identical in quantitative terms, but in the second framing the decision-maker is losing instead of saving lives, setting 0 lives lost as the status quo from which losses are measured and making the sure loss of 400 people more loathsome than the probable loss of 600.
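The claim that the two framings are "identical in quantitative terms" is easy to check: all four programs have the same expected number of survivors.

```python
# Expected survivors under each framing of the 600-person outbreak.
ev_a = 200                               # Program A: 200 saved for sure
ev_b = (1 / 3) * 600 + (2 / 3) * 0       # Program B: all 600 with p = 1/3
ev_c = 600 - 400                         # Program C: 400 die for sure
ev_d = (1 / 3) * 600 + (2 / 3) * 0       # Program D: none die with p = 1/3

for ev in (ev_b, ev_c, ev_d):
    assert abs(ev - ev_a) < 1e-9
print("expected survivors are identical in all four programs:", ev_a)
```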
References and links
Hickman, J. (2003). "Bush in Baghdad" (http://baltimorechronicle.com/dec03_BushinBaghdad.html). Baltimore Chronicle.
Kahneman, D. & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica 47, 263-291.
Tversky, A. & Kahneman, D. (1991). Loss Aversion in Riskless Choice: A Reference-Dependent Model. Quarterly Journal of Economics 106, 1039-1061.
Gal, D. (2005). "A Psychological Law of Inertia and the Illusion of Loss Aversion" (http://ssrn.com/abstract=831104).
Rhoads, K. (1997). "Loss Aversion, Risk, & Framing: The Psychology of an Influence Strategy" (http://www.workingpsychology.com/lossaver.html).
Risk aversion - Wikipedia, the free encyclopedia
http://en.wikipedia.org/w/index.php?title=Risk_aversion&printable=yes
Risk aversion From Wikipedia, the free encyclopedia
Risk aversion is a concept in economics, finance, and psychology explaining the behaviour of consumers and investors under uncertainty. Risk aversion is the reluctance of a person to accept a bargain with an uncertain payoff rather than another bargain with a more certain but possibly lower expected payoff. The inverse of a person's risk aversion is sometimes called their risk tolerance. For a more general discussion see the main article risk.
Example
A person is given the choice between a bet of either receiving $100 or nothing, both with a probability of 50%, or instead receiving some amount with certainty. He is:
risk-averse if he would rather accept a payoff of less than $50 (for example, $40) with certainty than take the bet;
risk-neutral if he is indifferent between the bet and a certain $50 payment;
risk-loving if the certain payment must be more than $50 (for example, $60) to induce him to take the certain option over the bet.
The average payoff of the bet, its expected value, is $50. The certain amount accepted instead of the bet is called the certainty equivalent; the difference between it and the expected value is called the risk premium.
Utility of money
In utility theory, a consumer has a utility function U(x_i), where the x_i are amounts of goods with index i. From this it is possible to derive a function u(c) of the utility of consumption c as a whole. Here consumption c is equivalent to money in real terms, i.e. without inflation. The utility function u(c) is defined only up to linear transformation. The graph shows this situation for the risk-averse player: the utility of the bet, E(u) = (u(0) + u(100)) / 2, is as big as the utility of the certainty equivalent CE. With a certainty equivalent of $40, the risk premium is $50 - $40 = $10, or 25% of the certainty equivalent.
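In numbers, using the bet from the example above and the illustrative certainty equivalent of $40:

```python
# The bet: $100 or $0, each with probability 1/2.
expected_value = 0.5 * 100 + 0.5 * 0     # $50
certainty_equivalent = 40                # the risk-averse player's CE (given)

risk_premium = expected_value - certainty_equivalent
print(risk_premium, risk_premium / certainty_equivalent)  # $10, i.e. 25%
```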
Measures of risk aversion
Absolute risk aversion
The higher the curvature of u(c), the higher the risk aversion. However, since expected-utility functions are not uniquely defined (only up to affine transformations), a measure that stays constant under these transformations is needed. This measure is the Arrow-Pratt measure of absolute risk aversion (ARA, after the economists Kenneth Arrow and John W. Pratt), or coefficient of absolute risk aversion, defined as

    r_u(c) = -u''(c) / u'(c).

The following expressions relate to this term:
Exponential utility of the form u(c) = -e^(-αc) is unique in exhibiting constant absolute risk aversion (CARA): r_u(c) = α is constant with respect to c.
Decreasing/increasing absolute risk aversion (DARA/IARA) holds if r_u(c) is decreasing/increasing. An example of a DARA utility function is u(c) = ln(c), with r_u(c) = 1/c, while u(c) = c - αc², α > 0, with r_u(c) = 2α / (1 - 2αc), would represent a utility function exhibiting IARA.
Experimental and empirical evidence is most consistent with decreasing absolute risk aversion.
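The CARA claim can be checked numerically, assuming the standard Arrow-Pratt definition r_u(c) = -u''(c)/u'(c) and using central finite differences for the derivatives:

```python
# Numerical check that exponential utility u(c) = -exp(-a*c) has
# constant absolute risk aversion r_u(c) = a.
import math

a = 0.7

def u(c):
    return -math.exp(-a * c)

def ara(c, h=1e-4):
    u1 = (u(c + h) - u(c - h)) / (2 * h)             # central u'(c)
    u2 = (u(c + h) - 2 * u(c) + u(c - h)) / h ** 2   # central u''(c)
    return -u2 / u1

for c in (1.0, 5.0, 20.0):
    assert abs(ara(c) - a) < 1e-3
print("ARA is constant at", a)
```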
Relative risk aversion
The Arrow-Pratt measure of relative risk aversion (RRA), or coefficient of relative risk aversion, is defined as

    R_u(c) = c · r_u(c) = -c · u''(c) / u'(c).

As for absolute risk aversion, the corresponding terms constant relative risk aversion (CRRA) and decreasing/increasing relative risk aversion (DRRA/IRRA) are used. This measure has the advantage that it remains a valid measure of risk aversion even if the utility function changes from risk-averse to risk-loving as c varies, i.e. is not strictly concave/convex over all c. In intertemporal choice problems, the elasticity of intertemporal substitution is often the reciprocal of the coefficient of relative risk aversion. The "isoelastic" utility function

    u(c) = c^(1-ρ) / (1 - ρ)

exhibits constant relative risk aversion with R_u(c) = ρ. When ρ = 1 this reduces to the case of log utility, u(c) = ln(c), and the income effect and substitution effect on saving exactly offset each other.
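The same finite-difference check confirms that the isoelastic utility above has constant relative risk aversion equal to ρ:

```python
# Numerical check that u(c) = c**(1-rho)/(1-rho) has constant
# relative risk aversion R_u(c) = -c*u''(c)/u'(c) = rho.
rho = 2.5

def u(c):
    return c ** (1 - rho) / (1 - rho)

def rra(c, h=1e-4):
    u1 = (u(c + h) - u(c - h)) / (2 * h)             # central u'(c)
    u2 = (u(c + h) - 2 * u(c) + u(c - h)) / h ** 2   # central u''(c)
    return -c * u2 / u1

for c in (0.5, 2.0, 10.0):
    assert abs(rra(c) - rho) < 1e-3
print("RRA is constant at", rho)
```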
Portfolio theory
In modern portfolio theory, risk aversion is measured as the additional marginal reward an investor requires per unit of additional risk. Risk is measured as the standard deviation of the return on investment, i.e. the square root of its variance. In advanced portfolio theory, different kinds of risk are taken into consideration; they are measured as the n-th root of the n-th central moment. The symbol used for risk aversion is A or A_n.
Limitations
The notion of (constant) risk aversion has come under criticism from behavioral economics. According to Matthew Rabin of UC Berkeley, a consumer who, from any initial wealth level [...] turns down gambles where she loses $100 or gains $110, each with 50% probability [...] will turn down 50-50 bets of losing $1,000 or gaining any sum of money. The point is that if we calculate the constant relative risk aversion (CRRA) from the first small-stakes gamble it will be so great that the same CRRA, applied to gambles with larger stakes, will lead to absurd predictions. The bottom line is that we cannot infer a CRRA from one gamble and expect it to scale up to larger gambles. It is noteworthy that Rabin's article has often been wrongly quoted as a justification for assuming risk neutral behavior of people in small stake gambles. The major solution to the problem observed by Rabin is the one proposed by prospect theory and cumulative prospect theory, where outcomes are considered relative to a reference point (usually the status quo), rather than to consider only the final wealth.
Other categories
See "harm reduction". Risk-aversion theory can be applied to many aspects of life and its challenges, for example:
Bribery and corruption: whether the risk of being implicated or caught outweighs the potential personal or professional rewards.
Drugs: whether the risk of having a bad trip outweighs the benefits of a possibly transformative one; whether the risk of defying social bans is worth the experience of alteration.
Sex: judging whether an experience that goes against social convention, ethical mores or common health prescriptions is worth the risk.
Extreme sports: having the ability to go against biological predispositions such as the fear of heights.
External links
More thorough introduction (http://cepa.newschool.edu/het/essays/uncert/aversion.htm#pratt)
Prof. Rabin's homepage (http://emlab.berkeley.edu/users/rabin/)
A response (2001) by Ariel Rubinstein (http://arielrubinstein.tau.ac.il/papers/rabin3.pdf)
Paper about problems with risk aversion (http://repositories.cdlib.org/cgi/viewcontent.cgi?article=1025&context=iber/econ)
Economist article on monkey experiments showing behaviours resembling risk aversion (http://www.economist.com/science/displayStory.cfm?story_id=4102350)
Risk neutral - Wikipedia, the free encyclopedia
http://en.wikipedia.org/w/index.php?title=Risk_neutral&printable=yes
Risk neutral From Wikipedia, the free encyclopedia
In economics, the term risk neutral is used to describe an individual who cares only about the expected return of an investment, and not the risk (variance of outcomes or the potential gains or losses). A risk-neutral person will neither pay to avoid risk nor actively take risks. Risk neutral is in between risk aversion and risk seeking.
The value that a risk-neutral individual assigns to a financial instrument is equal to the value of the instrument in each scenario, weighted by the probability of each scenario (in other words, it is equal to the expected value). Because of this, the term risk-neutral probabilities (or risk-neutral probability distribution) is used to refer to probabilities (or a distribution) which, when used as weights in an expected-value calculation, will reproduce the market value of financial instruments. In general, risk-neutral probabilities differ from real-world probabilities because the market does not assign value in the same way that a risk-neutral individual would. A more mathematically advanced definition is: "A risk-neutral world is one where investors are assumed to require no extra return on average for bearing risks."
Risk-neutral measure - Wikipedia, the free encyclopedia
http://en.wikipedia.org/w/index.php?title=Risk-neutral_measure&print...
Risk-neutral measure From Wikipedia, the free encyclopedia
In mathematical finance, a risk-neutral measure is a probability measure in which today's fair (i.e. arbitrage-free) price of a derivative is equal to the expected value (under the measure) of the future payoff of the derivative discounted at the risk-free rate.
Background
The measure is so called because, under it, all financial assets in the economy have the same expected rate of return, regardless of the 'riskiness' (i.e. the variability in price) of the asset.[1] This is in contrast to the physical measure, i.e. the actual probability distribution of prices, under which (almost universally[2]) more risky assets (those with higher price volatility) have a greater expected rate of return than less risky assets.
Risk-neutral measures make it easy to express the value of a derivative in a formula. Suppose at some time T in the future a derivative (for example, a call option on a stock) pays off H_T units, where H_T is a random variable on the probability space describing the market. Further suppose that the discount factor from now (time zero) until time T is P(0, T). Then today's fair value of the derivative is

    H_0 = P(0, T) · E_Q[H_T],

where the risk-neutral measure is denoted by Q. This can be restated in terms of the physical measure P as

    H_0 = P(0, T) · E_P[(dQ/dP) · H_T],

where dQ/dP is the Radon-Nikodym derivative of Q with respect to P.
Another name for the risk-neutral measure is the equivalent martingale measure. A particular financial market may have one or more risk-neutral measures. If there is just one then there is a unique arbitrage-free price for each asset in the market. This is the fundamental theorem of arbitrage-free pricing. If there is more than one such measure then there is an interval of prices in which no arbitrage is possible. In this case the equivalent martingale measure terminology is more commonly used.
Example 1 - Binomial Model of Stock Prices
Suppose that we have a two-state economy: the initial stock price S can go either up to Su or down to Sd. If the gross interest rate is R (with R - 1 > 0), and we have the relation Sd < RS < Su, then the risk-neutral probability of an
upward stock movement is given by the number

    q = (R·S - Sd) / (Su - Sd).

Given a derivative that has payoff Xu when the stock price moves upward and Xd when it moves downward, we can price the derivative via

    X = (q·Xu + (1 - q)·Xd) / R.
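The one-period binomial pricing above, in numbers. The figures for S, Su, Sd and R are invented, chosen only so that the no-arbitrage condition Sd < R·S < Su holds.

```python
# One-period binomial risk-neutral pricing.
S, Su, Sd, R = 100.0, 120.0, 90.0, 1.05        # Sd < R*S < Su holds
q = (R * S - Sd) / (Su - Sd)                   # risk-neutral up-probability

# Sanity check: under q, the stock itself is priced arbitrage-free.
assert abs((q * Su + (1 - q) * Sd) / R - S) < 1e-9

# Price a call option struck at 100: pays 20 in the up state, 0 down.
Xu, Xd = max(Su - 100, 0), max(Sd - 100, 0)
price = (q * Xu + (1 - q) * Xd) / R
print(f"q = {q:.3f}, call price = {price:.3f}")
```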
Example 2 - Brownian Motion Model of Stock Prices
Suppose that our economy consists of one stock and one risk-free bond, and that the model describing the evolution of the world is the Black-Scholes model. In the model the stock has dynamics

    dS_t = μ S_t dt + σ S_t dW_t,

where W_t is a standard Brownian motion with respect to the physical measure. If we define

    θ = (μ - r) / σ,

which is recognizable as the market price of risk, then Girsanov's theorem states that there exists a measure Q under which

    W̃_t = W_t + θ·t

is a Brownian motion. Substituting in, we have

    dS_t = r S_t dt + σ S_t dW̃_t.

Q is the unique risk-neutral measure for the model. The (discounted) payoff process of a derivative on the stock is a martingale under Q. Since S and H are Q-martingales, we can invoke the martingale representation theorem to find a replicating strategy - a holding of stocks and bonds that pays off H_t at all times t ≤ T.
Notes
1. ^ In fact, under the measure the rate of return of every asset is equal to the short rate.
2. ^ This is true in all risk-averse markets, which include all large financial markets. Examples of risk-seeking markets are casinos and lotteries. A player can choose to play no casino games (zero risk, zero expected return) or to play some (significant risk, negative expected return); the player pays a premium for the entertainment of taking a risk.
Risk - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/Risk
Risk From Wikipedia, the free encyclopedia
Risk is a concept that denotes a potential negative impact to an asset, or to some characteristic of value, that may arise from some present process or future event. In everyday usage, "risk" is often used synonymously with the probability of a known loss. Paradoxically, a probable loss can be uncertain and relative in an individual event while being nearly certain in the aggregate of many events (see risk vs. uncertainty below). Risk communication and risk perception are essential factors in all human decision-making.
Definitions of risk
Many definitions of risk depend on specific applications and situational contexts. Generally, a qualitative risk is considered proportional to the expected losses which can be caused by a risky event and to the probability of this event. The harsher the loss and the more likely the event, the greater the overall risk. Measuring risk is often difficult; rare failures can be hard to estimate, and loss of human life is generally considered beyond estimation. An engineering definition of risk is:

    risk = (probability of an accident) × (expected loss in case of the accident).

Financial risk is often defined as the unexpected variability or volatility of returns, and thus includes both potential worse-than-expected and better-than-expected returns. References to negative risk below should be read as also applying to positive impacts or opportunity (e.g. for "loss" read "loss or gain") unless the context
precludes.
In statistics, risk is often mapped to the probability of some event which is seen as undesirable. Usually the probability of that event and some assessment of its expected harm must be combined into a believable scenario (an outcome) which combines the set of risk, regret and reward probabilities into an expected value for that outcome. (See also expected utility.) Thus in statistical decision theory, the risk function of an estimator δ(x) for a parameter θ, calculated from some observables x, is defined as the expectation value of the loss function L:

    R(θ, δ) = E_θ[L(θ, δ(x))],

where δ(x) is the estimator and θ is the parameter it estimates.
There are many informal methods used to assess or to "measure" risk, although it is not usually possible to directly measure risk. Formal methods measure the value at risk.
In scenario analysis, "risk" is distinct from "threat". A threat is a very low-probability but serious event, to which some analysts may be unable to assign a probability in a risk assessment because it has never occurred, and for which no effective preventive measure (a step taken to reduce the probability or impact of a possible future event) is available. The difference is most clearly illustrated by the precautionary principle, which seeks to reduce threat by requiring it to be reduced to a set of well-defined risks before an action, project, innovation or experiment is allowed to proceed.
In information security, a "risk" is defined as a function of three variables:
1. the probability that there is a threat,
2. the probability that there are any vulnerabilities, and
3. the potential impact.
If any of these variables approaches zero, the overall risk approaches zero. The management of actuarial risk is called risk management.
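The risk function R(θ, δ) = E_θ[L(θ, δ(x))] can be estimated by Monte Carlo. The sketch below compares the sample mean and the sample median as estimators of a normal location parameter under squared-error loss; the parameter values and sample sizes are arbitrary choices for the illustration.

```python
# Monte Carlo estimate of the risk of two estimators of a normal mean.
import random
import statistics

random.seed(1)
theta, n, trials = 3.0, 25, 5_000

def risk(estimator):
    """Average squared-error loss of `estimator` over many samples."""
    losses = []
    for _ in range(trials):
        x = [random.gauss(theta, 1.0) for _ in range(n)]
        losses.append((estimator(x) - theta) ** 2)
    return sum(losses) / trials

r_mean = risk(statistics.fmean)
r_median = risk(statistics.median)
print(f"risk(mean) = {r_mean:.4f}, risk(median) = {r_median:.4f}")
# For normal data the mean has the smaller risk (variance ~ 1/n,
# versus ~ pi/(2n) for the median).
```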
Scientific background

Scenario analysis matured during Cold War confrontations between major powers, notably the USA and USSR. It became widespread in insurance circles in the 1970s when major oil tanker disasters forced a more comprehensive foresight. The scientific approach to risk entered finance in the 1980s when financial derivatives proliferated. It reached general professions in the 1990s when the power of personal computing allowed for widespread data collection and number crunching. Governments are apparently only now learning to use sophisticated risk methods, most obviously to set standards for environmental regulation, e.g. "pathway analysis" as practiced by the US Environmental Protection Agency.
Risk vs. uncertainty

In his seminal work "Risk, Uncertainty, and Profit", Frank Knight (1921) established the distinction between risk and uncertainty.
“
... Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated. The term "risk," as loosely used in everyday speech and in economic discussion, really covers two things which, functionally at least, in their causal relations to the phenomena of economic organization, are categorically different. ... The essential fact is that "risk" means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomenon depending on which of the two is really present and operating. ... It will appear that a measurable uncertainty, or "risk" proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all. We ... accordingly restrict the term "uncertainty" to cases of the non-quantitative type.
”
Insurance and health risk

Insurance is a risk-reducing investment in which the buyer pays a small fixed amount to be protected from a potential large loss. Gambling is a risk-increasing investment, wherein money on hand is risked for a possible large return, but with the possibility of losing it all. Purchasing a lottery ticket is a very risky investment with a high chance of no return and a small chance of a very high return. In contrast, putting money in a bank at a defined rate of interest is a risk-averse action that gives a guaranteed return of a small gain and precludes other investments with possibly higher gain.

Risks in personal health may be reduced by primary prevention actions that decrease early causes of illness, or by secondary prevention actions after a person has clearly measured clinical signs or symptoms recognized as risk factors. Tertiary prevention reduces the negative impact of an already established disease by restoring function and reducing disease-related complications. Ethical medical practice requires careful discussion of risk factors with individual patients to obtain informed consent for secondary and tertiary prevention efforts, whereas public health efforts in primary prevention require education of the entire population at risk. In each case, careful communication about risk factors, likely outcomes and certainty must distinguish between causal events that must be decreased and associated events that may be merely consequences rather than causes.
Economic risk

In business

Means of assessing risk vary widely between professions. Indeed, they may define these professions; for example, a doctor manages medical risk, while a civil engineer manages risk of structural failure. A professional code of ethics is usually focused on risk assessment and mitigation (by the professional on behalf of client, public, society or life in general).

See also: Insurance industry

Financial risk
Credit risk
Interest rate risk
Legal risk
Liquidity risk
Market risk
Investment risk
Reinvestment risk
Risk-sensitive industries

Some industries manage risk in a highly quantified and numerate way. These include the nuclear power and aircraft industries, where the possible failure of a complex series of engineered systems could result in highly undesirable outcomes. The usual measure of risk for a class of events is then

R = P × C

where P is the probability of the event and C is its consequence.
The total risk is then the sum of the individual class-risks, R_total = Σ_i P_i × C_i. In the nuclear industry, 'consequence' is often measured in terms of off-site radiological release, and this is often banded into five or six decade-wide bands.

See also: Operational risk, Safety engineering

The risks are evaluated using Fault Tree/Event Tree techniques (see safety engineering). Where these risks are low, they are normally considered 'Broadly Acceptable'. A higher level of risk (typically up to 10 to 100 times what is considered broadly acceptable) has to be justified against the costs of reducing it further and the possible benefits that make it tolerable; these risks are described as 'Tolerable if ALARP'. Risks beyond this level are classified as 'Intolerable'. The level of risk deemed 'Broadly Acceptable' has been considered by regulatory bodies in various countries. An early attempt by UK government regulator and academic F. R. Farmer used the example of hill-walking and similar activities, which have definable risks that people appear to find acceptable. This resulted in the so-called Farmer Curve of acceptable probability of an event versus its consequence. The technique as a whole is usually referred to as Probabilistic Risk Assessment (PRA), or Probabilistic Safety Assessment (PSA). See WASH-1400 for an example of this approach.

In finance

In finance, risk is the probability that an investment's actual return will be different from the expected return. This includes the possibility of losing some or all of the original investment. It is usually measured by calculating the standard deviation of the historical returns or average returns of a specific investment. In finance, "risk" has no single definition, but some theorists, notably Ron Dembo, have defined quite general methods to assess risk as an expected after-the-fact level of regret. Such methods have been uniquely successful in limiting interest rate risk in financial markets.
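Both risk measures above are easily computed. A minimal sketch, with all figures invented purely for illustration: the class-risk sum R_total = Σ P_i C_i from the nuclear-industry style of assessment, and the standard-deviation measure of financial risk:

```python
import statistics

# Hypothetical event classes: (label, annual probability, consequence in
# arbitrary units). The numbers are illustrative, not real assessments.
classes = [
    ("minor release",    1e-2, 1.0),
    ("moderate release", 1e-4, 100.0),
    ("major release",    1e-6, 10_000.0),
]

# Total risk as the sum of the individual class-risks R_total = sum(P_i * C_i).
total_risk = sum(p * c for _, p, c in classes)
print(total_risk)  # ~0.03: each class happens to contribute 0.01

# Financial risk as volatility: sample standard deviation of historical returns.
returns = [0.05, -0.02, 0.07, 0.01, -0.04]
volatility = statistics.stdev(returns)
```

Note how the decade-wide banding in the text shows up here: each class is ten to a hundred times less probable but proportionally more severe, so very different events can carry comparable class-risk.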
Financial markets are considered to be a proving ground for general methods of risk assessment. However, these methods are also hard to understand. The mathematical difficulties interfere with other social goods such as disclosure, valuation and transparency. In particular, it is often difficult to tell if such financial instruments are "hedging" (purchasing/selling a financial instrument specifically to reduce or cancel out the risk in another investment) or "gambling" (increasing measurable risk and exposing the investor to catastrophic loss in pursuit of very high windfalls that increase expected value). As regret measures rarely reflect actual human risk-aversion, it is difficult to determine if the outcomes of such transactions will be satisfactory. Risk seeking describes an individual whose utility function's second derivative is positive. Such an individual would willingly (actually pay a premium to) assume all risk in the economy and is hence not likely to exist.
In financial markets, one may need to measure credit risk, information timing and source risk, probability model risk, and legal risk if there are regulatory or civil actions taken as a result of some "investor's regret". "A fundamental idea in finance is the relationship between risk and return. The greater the amount of risk that an investor is willing to take on, the greater the potential return. The reason for this is that investors need to be compensated for taking on additional risk". "For example, a US Treasury bond is considered to be one of the safest investments and, when compared to a corporate bond, provides a lower rate of return. The reason for this is that a corporation is much more likely to go bankrupt than the U.S. government. Because the risk of investing in a corporate bond is higher, investors are offered a higher rate of return". Risk is generally only a minor factor in the pricing of assets. The Black-Scholes pricing theory details exactly why: even if individuals may be risk-averse or risk-seeking, the market as a whole will generally treat each dollar equally, and hence produce a risk-neutral price. Most counterexamples are cases where the market is not likely to be rational, such as flight insurance sold in airports (which charges some 200 times the rate of risk-neutral insurance for the expectation of a disaster).
In public works

In a peer-reviewed study of risk in public works projects located in 20 nations on five continents, Flyvbjerg, Holm, and Buhl (2002, 2005) documented high risks for such ventures for both costs [1] (http://flyvbjerg.plan.aau.dk/JAPAASPUBLISHED.pdf) and demand [2] (http://flyvbjerg.plan.aau.dk/Traffic91PRINTJAPA.pdf). Actual costs of projects were typically higher than estimated costs; cost overruns of 50% were common, and overruns above 100% were not uncommon. Actual demand was often lower than estimated; demand shortfalls of 25% were common, and shortfalls of 50% were not uncommon. Due to such cost and demand risks, cost-benefit analyses of public works projects have proved to be highly uncertain. The main causes of cost and demand risks were found to be optimism bias and strategic misrepresentation. Measures identified to mitigate this type of risk are better governance through incentive alignment and the use of reference class forecasting [3] (http://flyvbjerg.plan.aau.dk/0406DfT-UK%20OptBiasASPUBL.pdf).
Risk at work

Incidental and inherent risks exist in the workplace. Incidental risks are those which occur naturally in the business but are not part of its core. Inherent risks have a negative effect on the operating profit of the business.
Psychology of risk

Regret

In decision theory, regret (and anticipation of regret) can play a significant part in decision-making, distinct from risk aversion (preferring the status quo in case one becomes worse off).
Framing

Framing is a fundamental problem with all forms of risk assessment. In particular, because of bounded rationality (our brains get overloaded, so we take mental shortcuts), the risk of extreme events is discounted because the probability is too low to evaluate intuitively. As an example, one of the leading causes of death is road accidents caused by drunk driving, partly because any given driver frames the problem by largely or totally ignoring the risk of a serious or fatal accident.
The above examples (body, threat, price of life, professional ethics and regret) show that the risk adjustor or assessor often faces serious conflicts of interest. The assessor also faces cognitive bias and cultural bias, and cannot always be trusted to avoid all moral hazards. This represents a risk in itself, which grows as the assessor is less like the client. For instance, an extremely disturbing event that all participants wish not to happen again may be ignored in analysis despite the fact that it has occurred and has a nonzero probability. Or, an event that everyone agrees is inevitable may be ruled out of analysis due to greed or an unwillingness to admit that it is believed to be inevitable. These human tendencies to error and wishful thinking often affect even the most rigorous applications of the scientific method and are a major concern of the philosophy of science.

All decision-making under uncertainty must consider cognitive bias, cultural bias, and notational bias: no group of people assessing risk is immune to "groupthink", the acceptance of obviously wrong answers simply because it is socially painful to disagree. One effective way to solve framing problems in risk assessment or measurement (although some argue that risk cannot be measured, only assessed) is to ensure that scenarios, as a strict rule, must include unpopular and perhaps unbelievable (to the group) high-impact low-probability "threat" and/or "vision" events. This permits participants in risk assessment to raise others' fears or personal ideals by way of completeness, without others concluding that they have done so for any reason other than satisfying this formal requirement. For example, an intelligence analyst with a scenario for an attack by hijacking might have been able to insert mitigation for this threat into the U.S. budget. It would be admitted as a formal risk with a nominal low probability.
This would permit coping with threats even though the threats were dismissed by the analyst's superiors. Even small investments in diligence on this matter might have disrupted or prevented the attack, or at least "hedged" against the risk that an Administration might be mistaken.
Fear as intuitive risk assessment

For the time being, people rely on their fear and hesitation to keep them out of the most profoundly unknown circumstances. In "The Gift of Fear", Gavin de Becker argues that "True fear is a gift. It is a survival signal that sounds only in the presence of danger. Yet unwarranted fear has assumed a power over us that it holds over no other creature on Earth. It need not be this way." Risk could be said to be the way we collectively measure and share this "true fear": a fusion of rational doubt, irrational fear, and a set of unquantified biases from our own experience. The field of behavioral finance focuses on human risk-aversion, asymmetric regret, and other ways that human financial behavior varies from what analysts call "rational". Risk in that case is the degree of uncertainty associated with a return on an asset. Recognizing and respecting the irrational influences on human decision making may do much to reduce disasters caused by naive risk assessments that pretend to rationality but in fact merely fuse many shared biases together.
References

Papers

Holton, Glyn A. (2004). Defining Risk (http://www.riskexpertise.com/papers/risk.pdf), Financial Analysts Journal, 60 (6), 19-25. A paper exploring the foundations of risk. (PDF file)
Knight, F. H. (1921). Risk, Uncertainty and Profit. Chicago: Houghton Mifflin Company. (Cited at: [4] (http://www.econlib.org/library/Knight/knRUP1.html), § I.I.26.)
Books
Historian David A. Moss's book When All Else Fails (http://www.hup.harvard.edu/catalog/MOSWHE.html) explains the U.S. government's historical role as risk manager of last resort.
Peter L. Bernstein. Against the Gods. ISBN 0-471-29563-9. Risk explained, and its appreciation by man traced from earliest times through all the major figures of their ages in mathematical circles.
Porteous, Bruce T.; Pradip Tapadar (2005). Economic Capital and Financial Risk Management for Financial Services Firms and Conglomerates. Palgrave Macmillan. ISBN 1-4039-3608-0.
Magazines

Risk Management Magazine (http://www.rmmagazine.com/)
Actuarial News and Risk Management Resource (http://www.actuarialnews.org/)
Actuary .NET Actuarial News and Risk Management Info (http://www.actuary.net/)
Risk and Insurance (http://www.riskandinsurance.com/)

Journals

Risk Analysis: An International Journal (http://www.blackwellpublishing.com/journal.asp?ref=0272-4332/)
Journal of Risk Research (http://www.tandf.co.uk/journals/carfax/13669877.html/)

Societies

The Society for Risk Analysis (http://www.sra.org/)

See also

Adventure
Cindynics
Civil defense
Cost overrun
Ergonomy
Event chain methodology
Hazard and hazard prevention
International Risk Governance Council
Kelly Criterion For Stock Market
Life-critical system
Loss aversion
Optimism bias
Prevention
Probabilistic risk assessment
Risk analysis
Risk aversion
Risk homeostasis
Risk management
Risk-neutral measure
Risk register
Systemic risk
Uncertainty
Value at risk

External links

Risk Guidelines (http://www.ny1.org/RiskGuide.pdf)
Certainty equivalents applet (http://www.gametheory.net/Mike/applets/Risk/)
Glossary (http://www.risk-glossary.com/)
Cognitive perspective: Risk Based Reasoning (http://www.riskworld.com/Abstract/1998/SRAEUR98/eu8ab148.htm) - SRA - Europe Conference, 1998
EServer TC Library: Risk Communication (http://tc.eserver.org/dir/Risk-Communication)
A Primer on Risk Communication Principles and Practices (http://www.atsdr.cdc.gov/HEC/primer.html)
The Risk Management Guide (http://www.ruleworks.co.uk/riskguide) - A to Z and FAQ info
Risk made simple (graphic) (http://www.gladstonellc.com/definitions/risk.htm)
@RISK software for Risk Analysis in Excel (http://www.palisade.com/risk/)
RiskyProject project risk analysis software (http://www.intaver.com)
GoldSim - Using Simulation for Risk Analysis (http://www.goldsim.com/Content.asp?PageID=468)
Risk Checklist (http://www.spoce.com/)

Retrieved from "http://en.wikipedia.org/wiki/Risk". This page was last modified 23:43, 23 April 2007. All text is available under the terms of the GNU Free Documentation License. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a US-registered 501(c)(3) tax-deductible nonprofit charity.
Life-critical system

From Wikipedia, the free encyclopedia
A life-critical system or safety-critical system is a system whose failure or malfunction may result in death or serious injury to people, loss or severe damage to equipment, or environmental harm. Risks of this sort are usually managed with the methods and tools of safety engineering. A life-critical system is designed to lose less than one life per billion (10^9) hours of operation.[1] Typical design methods include probabilistic risk assessment, a method that combines failure modes and effects analysis with fault tree analysis.
Contents

1 Reliability regimes
2 Software engineering for life-critical systems
3 Examples of life-critical systems
3.1 Infrastructure
3.2 Medicine
3.3 Nuclear engineering
3.4 Recreation
3.5 Transport
3.5.1 Automotive
3.5.2 Aviation
4 See also
5 References
6 External links
Reliability regimes

Several reliability regimes for life-critical systems exist:

Fail-operational systems continue to operate when they fail. Examples include train signals, elevators, the gas thermostats in most home furnaces, and passively safe nuclear reactors. Fail-operational mode is sometimes unsafe. Nuclear weapons launch-on-loss-of-communications was rejected as a control system for the U.S. nuclear forces because it is fail-operational: a loss of communications would cause launch, so this mode of operation was considered too risky.

Fail-safe systems become safe when they cannot operate. Many medical systems fall into this category. For example, an infusion pump can fail, and as long as it complains to the nurse and ceases pumping, it will not threaten loss of life, because its safety interval is long enough to permit a human response. Similarly, an industrial or domestic burner controller (of which there are thousands in our homes and workplaces, all with explosive and poisoning capabilities) can fail, but must fail in a safe mode (i.e. turn combustion off when it detects faults). Famously, nuclear weapon systems that launch-on-command are fail-safe, because if the communications systems fail, launch cannot be commanded.

Fail-secure
systems maintain maximum security when they cannot operate. For example, while fail-safe electronic doors unlock during power failures, fail-secure ones lock, possibly trapping people in a burning building.

Fault-tolerant systems continue to operate correctly when subsystems operate incorrectly. Examples include autopilots on commercial aircraft and control systems for ordinary nuclear reactors. The normal method to tolerate faults is to have several computers continually test the parts of a system and switch in hot spares for failing subsystems. As long as faulty subsystems are replaced or repaired at normal maintenance intervals, these systems have excellent safety. Notably, the computers, power supplies and control terminals used by human beings must all be duplicated in these systems in some fashion.
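The fail-safe behaviour described for the infusion pump can be sketched as a toy state machine. This is purely illustrative (not a real device controller); the class and method names are invented:

```python
# Toy sketch of the fail-safe regime: on any detected fault, move to the
# safe state (not pumping) and raise an alarm, as in the infusion-pump example.

class FailSafePump:
    """On a sensor fault: stop delivery (safe state) and complain to the nurse."""

    def __init__(self) -> None:
        self.pumping = False
        self.alarm = False

    def step(self, sensor_ok: bool, demand: bool) -> None:
        if not sensor_ok:
            self.pumping = False   # fail-safe: cease pumping
            self.alarm = True      # ...and alert a human operator
        else:
            self.pumping = demand

pump = FailSafePump()
pump.step(sensor_ok=True, demand=True)
assert pump.pumping                       # normal operation
pump.step(sensor_ok=False, demand=True)
assert not pump.pumping and pump.alarm    # fault: safe state + alarm
```

A fail-operational design would instead keep `pumping` true on a fault (acceptable for a train signal, dangerous for a pump), which is exactly the design distinction the text draws.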
Software engineering for life-critical systems

Software engineering for life-critical systems is particularly difficult, but the avionics industry has succeeded in producing standard methods for producing life-critical avionics software. The standard approach is to carefully code, inspect, document, test, verify and analyse the system. Another approach is to certify a production system and a compiler, and then generate the system's code from specifications. Another approach uses formal methods to generate proofs that the code meets requirements. All of these approaches improve the software quality in safety-critical systems by testing or eliminating manual steps in the development process, because people make mistakes, and these mistakes are the most common cause of potential life-threatening errors.
Examples of life-critical systems

Infrastructure

Circuit breaker
Emergency services dispatch systems
Electricity generation, transmission and distribution
Fire alarm
Fire sprinkler
Fuse (electrical)
Fuse (hydraulic)
Telecommunications
Burner control systems

Medicine

The technology requirements can go beyond avoidance of failure, and can even facilitate medical intensive care (which deals with healing patients) and life support (which is for stabilizing patients).

Heart-lung machines
Mechanical ventilation systems
Infusion pumps and insulin pumps
Radiation therapy machines

Nuclear engineering

Nuclear reactor control systems

Recreation

Amusement rides
Parachutes

Transport

Railway signalling and control systems
Automotive

Airbag systems
Braking systems
Seat belts

Aviation

Air traffic control systems
Avionics, particularly fly-by-wire systems
Aircrew life support systems
Flight planning to determine fuel requirements for a flight
See also

Reliability theory
Reliable system design
Redundancy (engineering)
Nuclear reactor
Biomedical engineering
SAPHIRE (risk analysis software)
Formal methods
Therac-25
References

1. ^ AC 25.1309-1A
External links

An Example of a Life-Critical System (http://shemesh.larc.nasa.gov/fm/fm-why-def-life-critical.html)

Retrieved from "http://en.wikipedia.org/wiki/Life-critical_system". This page was last modified 03:00, 18 April 2007. All text is available under the terms of the GNU Free Documentation License.
Status quo bias

From Wikipedia, the free encyclopedia
The status quo bias is a cognitive bias for the status quo; in other words, people like things to stay relatively the same. The finding has been observed in many fields, including political science and economics. Kahneman, Thaler and Knetsch created experiments that could produce this effect reliably. They attribute it to a combination of loss aversion and the endowment effect, two ideas relevant to prospect theory. The US states of New Jersey and Pennsylvania inadvertently ran a real life experiment providing evidence of the status quo bias in the early 1990s. As part of tort law reform programs, citizens were offered two options for their automotive insurance: an expensive option giving them full right to sue and a less expensive option with restricted rights to sue. Corresponding options in each state were roughly equivalent. In New Jersey the more expensive option was the default and 75% of citizens selected it, while only 20% chose this option in Pennsylvania where the other option was the default. Similar effects have been shown for contributions to retirement plans, choice of internet privacy policies and the decision to become an organ donor.
See also

List of cognitive biases
References

Samuelson, W. & Zeckhauser, R. J. (1988). Status quo bias in decision making. Journal of Risk and Uncertainty, 1, 7-59.
Kahneman, D., Knetsch, J. L. & Thaler, R. H. (1991). Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias. Journal of Economic Perspectives, 5 (1), 193-206.
Johnson, E. J., Hershey, J., Meszaros, J., & Kunreuther, H. (1993). Framing, Probability Distortions, and Insurance Decisions. Journal of Risk and Uncertainty, 7, 35-51.

Retrieved from "http://en.wikipedia.org/wiki/Status_quo_bias". This page was last modified 09:23, 14 April 2007. All text is available under the terms of the GNU Free Documentation License.
Prospect theory

From Wikipedia, the free encyclopedia
Prospect theory was developed by Daniel Kahneman and Amos Tversky in 1979 as a psychologically realistic alternative to expected utility theory. It allows one to describe how people make choices in situations where they have to decide between alternatives that involve risk, e.g. in financial decisions. Starting from empirical evidence, the theory describes how individuals evaluate potential losses and gains. In the original formulation the term prospect referred to a lottery.
The theory describes such decision processes as consisting of two stages, editing and evaluation. In the first, possible outcomes of the decision are ordered following some heuristic. In particular, people decide which outcomes they see as basically identical, set a reference point, and consider lower outcomes as losses and larger ones as gains. In the following evaluation phase, people behave as if they would compute a value (utility) based on the potential outcomes and their respective probabilities, and then choose the alternative having the higher utility. The formula that Kahneman and Tversky assume for the evaluation phase is (in its simplest form) given by

U = Σ_i w(p_i) v(x_i)

where x_1, x_2, ... are the potential outcomes and p_1, p_2, ... their respective probabilities. v is a so-called value function that assigns a value to an outcome. The value function, which passes through the reference point, is s-shaped and, as its asymmetry implies, given the same variation in absolute value, there is a bigger impact of losses than of gains (loss aversion). In contrast to expected utility theory, it measures losses and gains, but not absolute wealth. The function w is called a probability weighting function and expresses that people tend to overreact to small-probability events, but underreact to medium and large probabilities. To see how prospect theory (PT) can be applied in an example, consider a decision about buying an insurance policy. Let us assume the probability of the insured risk is 1%, the potential loss is $1000 and the premium is $15. If we apply PT, we first need to set a reference point. This could be, e.g., the current wealth or the worst case (losing $1000). If we set the frame to the current wealth, the decision is either to pay $15 for sure (which gives the PT-utility of v(-15)) or to face a lottery with outcomes $0 (probability 99%) or -$1000 (probability 1%), which yields the PT-utility of w(1%) v(-1000) + w(99%) v(0) = w(1%) v(-1000).
These expressions can be computed numerically. For typical value and weighting functions, the former expression could be larger due to the convexity of v in losses, and hence the insurance would look unattractive. If we set the frame to -$1000, both alternatives are framed as gains. The concavity of the value function in gains can then lead to a preference for buying the insurance. We see in this example that a strong overweighting of small probabilities can also undo the effect of the convexity of v in losses: the potential outcome of losing $1000 is overweighted. The interplay of overweighting of small probabilities and the concavity-convexity of the value function leads to the so-called fourfold pattern of risk attitudes: risk-averse behavior in gains involving moderate probabilities and in small-probability losses; risk-seeking behavior in losses involving moderate probabilities and in small-probability gains. This explains why people, e.g., simultaneously buy lottery tickets and insurance, but still invest money conservatively. Some behaviors observed in economics, like the disposition effect or the reversing of risk aversion/risk seeking in
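The insurance example can be checked numerically. A minimal sketch using the parameter estimates commonly attributed to Tversky and Kahneman's 1992 cumulative version (alpha = 0.88, lambda = 2.25, loss-side weighting exponent 0.69); the functional forms are standard in that literature, but the specific framing here is illustrative:

```python
# Prospect-theory evaluation U = sum_i w(p_i) * v(x_i) for the $15-premium
# vs. 1%-chance-of-losing-$1000 example, framed relative to current wealth.

def v(x, alpha=0.88, lam=2.25):
    """S-shaped value function: concave in gains, convex and steeper in losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def w(p, gamma):
    """Inverse-S probability weighting: overweights small probabilities."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

pt_pay_premium = v(-15)                       # sure loss of the premium
pt_go_uninsured = w(0.01, 0.69) * v(-1000)    # w(1%) v(-1000) + w(99%) v(0)

# Overweighting the 1% chance makes insurance the preferred (less bad) option:
print(pt_pay_premium > pt_go_uninsured)  # True
```

With these parameters w(1%) is roughly 0.04 rather than 0.01, so the uninsured lottery is valued much worse than its raw expectation, illustrating how overweighting of small probabilities can undo the convexity of v in losses.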
case of gains or losses (termed the reflection effect), can also be explained by reference to prospect theory. An important implication of prospect theory is that the way economic agents subjectively frame an outcome or transaction in their mind affects the utility they expect or receive. This aspect has been widely used in behavioral economics and mental accounting. Framing and prospect theory have been applied to a diverse range of situations which appear inconsistent with standard economic rationality: the equity premium puzzle, the status quo bias, various gambling and betting puzzles, intertemporal consumption and the endowment effect. Another possible implication for economics is that utility might be reference-based, in contrast with the additive utility functions underlying much of neo-classical economics. This means people consider not only the value they receive, but also the value received by others. This hypothesis is consistent with psychological research into happiness, which finds subjective measures of wellbeing are relatively stable over time, even in the face of large increases in the standard of living (Easterlin, 1974; Frank, 1997). The original version of prospect theory gave rise to violations of first-order stochastic dominance: one prospect might be preferred to another even if it yielded a worse outcome with probability one. The editing phase overcame this problem, but at the cost of introducing intransitivity in preferences. A revised version, called cumulative prospect theory, overcame this problem by using a probability weighting function derived from rank-dependent expected utility theory. Cumulative prospect theory can also be used for infinitely many or even continuous outcomes (e.g. if the outcome can be any real number).
Sources

Easterlin, Richard A. (1974). "Does Economic Growth Improve the Human Lot?", in Paul A. David and Melvin W. Reder, eds., Nations and Households in Economic Growth: Essays in Honor of Moses Abramovitz. New York: Academic Press.
Frank, Robert H. (1997). "The Frame of Reference as a Public Good", The Economic Journal, 107 (November), 1832-1847.
Kahneman, Daniel, and Amos Tversky (1979). "Prospect Theory: An Analysis of Decision under Risk", Econometrica, XLVII, 263-291.
Post, Thierry; Van den Assem, Martijn J.; Baltussen, Guido; Thaler, Richard H. (2006). "Deal or No Deal? Decision Making Under Risk in a Large-Payoff Game Show", EFA 2006 Zurich Meetings Paper. Available at SSRN: http://www.ssrn.com/abstract=636508
http://prospect-theory.behaviouralfinance.net/
Retrieved from "http://en.wikipedia.org/wiki/Prospect_theory". This page was last modified 03:29, 9 April 2007.
4/26/2007 8:28 PM
Rank-dependent expected utility - Wikipedia, the free encyclopedia
http://en.wikipedia.org/w/index.php?title=Rank-dependent_expected_ut...
Rank-dependent expected utility From Wikipedia, the free encyclopedia
The rank-dependent expected utility model (originally called 'anticipated' utility) is a generalized expected utility model of choice under uncertainty, designed to explain the behaviour observed in the Allais paradox, as well as the observation that many people both purchase lottery tickets (implying risk-loving preferences) and insure against losses (implying risk aversion). A natural explanation of these observations is that individuals overweight low-probability events such as winning the lottery, or suffering a disastrous insurable loss. In the Allais paradox, individuals appear to forgo the chance of a very large gain to avoid a one per cent chance of missing out on an otherwise certain large gain, but are less risk averse when offered the chance of reducing an 11 per cent chance of loss to 10 per cent. A number of attempts were made to model preferences incorporating probability weighting, most notably the original version of prospect theory, presented by Daniel Kahneman and Amos Tversky (1979). However, all such models involved violations of first-order stochastic dominance. In prospect theory, violations of dominance were avoided by the introduction of an 'editing' operation, but this gave rise to violations of transitivity. The crucial idea of rank-dependent expected utility was to overweight only unlikely extreme outcomes, rather than all unlikely events. Formalising this insight required transformations to be applied to the cumulative probability distribution function, rather than to individual probabilities (Quiggin, 1982, 1993). The central idea of rank-dependent weightings was then incorporated by Daniel Kahneman and Amos Tversky into prospect theory, and the resulting model was referred to as cumulative prospect theory (Tversky & Kahneman, 1992).
Formal representation

As the name implies, the rank-dependent model is applied to the increasing rearrangement y_{[1]} \le y_{[2]} \le \dots \le y_{[n]} of the outcome vector y:

W(y) = \sum_{i=1}^{n} h_i(\pi) \, u(y_{[i]})

where \pi is the vector of outcome probabilities and

h_i(\pi) = q\Bigl(\sum_{t \ge i} \pi_t\Bigr) - q\Bigl(\sum_{t > i} \pi_t\Bigr)

is a probability weight, for a transformation function q : [0,1] \to [0,1] which satisfies q(0) = 0, q(1) = 1. Note that the sum telescopes, \sum_{i=1}^{n} h_i(\pi) = q(1) - q(0) = 1, so that the decision weights sum to 1.
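The representation above can be sketched numerically. The transformation q(p) = p**2 below is an arbitrary convex choice satisfying q(0) = 0 and q(1) = 1; the theory does not prescribe it, and the function names are illustrative:

```python
# Illustrative sketch: rank-dependent expected utility of a discrete prospect.

def rdeu(outcomes, probs, u=lambda x: x, q=lambda p: p ** 2):
    """Rank-dependent expected utility.

    Outcomes are arranged increasingly; the decision weight on outcome i is
    h_i = q(P(outcome i or anything better)) - q(P(anything strictly better)),
    so the weights telescope and sum to q(1) - q(0) = 1.
    """
    total = 0.0
    tail = 1.0  # probability of the current outcome or anything better
    for y, p in sorted(zip(outcomes, probs)):
        total += (q(tail) - q(tail - p)) * u(y)
        tail -= p
    return total

# A fair 50/50 gamble between 0 and 100: the convex q puts weight 0.75 on the
# worst outcome and only 0.25 on the best, giving a pessimistic evaluation.
print(rdeu([100, 0], [0.5, 0.5]))  # → 25.0, well below the expected value of 50
```

With a linear q the model reduces to ordinary expected utility, and with a convex (concave) q it overweights the worst (best) outcomes, which is how risk aversion and lottery purchase can coexist.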
References

Kahneman, Daniel and Amos Tversky (1979). "Prospect Theory: An Analysis of Decision under Risk", Econometrica, XLVII, 263-291.
Tversky, Amos and Daniel Kahneman (1992). "Advances in prospect theory: Cumulative representation of uncertainty", Journal of Risk and Uncertainty, 5, 297-323.
Quiggin, J. (1982). "A theory of anticipated utility", Journal of Economic Behavior and Organization, 3(4), 323-343.
Quiggin, J. (1993). Generalized Expected Utility Theory: The Rank-Dependent Model. Boston: Kluwer Academic Publishers.
Retrieved from "http://en.wikipedia.org/wiki/Rank-dependent_expected_utility". This page was last modified 04:41, 25 March 2007.
4/26/2007 8:33 PM
Behavioral finance - Wikipedia, the free encyclopedia
http://en.wikipedia.org/w/index.php?title=Behavioral_finance&printab...
Behavioral finance From Wikipedia, the free encyclopedia
Behavioral finance and behavioral economics are closely related fields which apply scientific research on human and social cognitive and emotional biases to better understand economic decisions and how they affect market prices, returns and the allocation of resources. The fields are primarily concerned with the rationality, or lack thereof, of economic agents. Behavioral models typically integrate insights from psychology with neo-classical economic theory. Behavioral analyses are mostly concerned with the effects of market decisions, but also those of public choice, another source of economic decisions with some similar biases.

Economics Nobel laureate Daniel Kahneman was an important figure in the development of behavioral finance and economics and continues to write extensively in the field.
Contents
1 History
2 Methodology
3 Key observations
4 Behavioral finance topics
4.1 Behavioral finance models
4.2 Criticisms of behavioral finance
5 Behavioral economics topics
5.1 Heuristics
5.2 Framing
5.3 Anomalies
6 Criticisms of behavioral economics
7 Key figures in behavioral economics
8 Key scholars in behavioral finance
9 References
10 See also
History

During the classical period, economics had a close link with psychology. For example, Adam Smith wrote The Theory of Moral Sentiments, an important text describing psychological principles of individual behavior; and Jeremy Bentham wrote extensively on the psychological underpinnings of utility.

Economists began to distance themselves from psychology during the development of neo-classical economics as they sought to reshape the discipline as a natural science, with explanations of economic behavior deduced from assumptions about the nature of economic agents. The concept of homo economicus was developed, and the psychology of this entity was fundamentally rational. Nevertheless, psychological explanations continued to inform the analysis of many important figures in the development of neo-classical economics such as Francis Edgeworth, Vilfredo Pareto, Irving Fisher and John Maynard Keynes.

Psychology had largely disappeared from economic discussions by the mid-20th century. A number of factors contributed to the resurgence of its use and the development of behavioral economics. Expected utility and discounted utility models began to gain wide acceptance, generating testable hypotheses about decision making under uncertainty and intertemporal consumption respectively. Soon a number of observed and repeatable anomalies challenged those hypotheses. Furthermore, during the 1960s cognitive psychology began to describe the brain as an information processing device (in contrast to behaviorist models). Psychologists in this field such as Ward Edwards, Amos Tversky and Daniel Kahneman began to compare their cognitive models of decision making under risk and uncertainty to economic models of rational behavior. Perhaps the most important paper in the development of the behavioral finance and economics fields was written by Kahneman and Tversky in 1979.
This paper, 'Prospect Theory: An Analysis of Decision under Risk', used cognitive psychological techniques to explain a number of documented anomalies in economic decision making. Further milestones in the development of the field include a well-attended and diverse conference at the University of Chicago (see Hogarth & Reder, 1987), a special 1997 edition of the Quarterly Journal of Economics ('In Memory of Amos Tversky') devoted to the topic of behavioral economics, and the award of the Nobel prize to Daniel Kahneman in 2002 "for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty."

Prospect theory is an example of generalized expected utility theory. Although not commonly included in discussions of the field of behavioral economics, generalized expected utility theory is similarly motivated by concerns about the descriptive inaccuracy of expected utility theory.

Behavioral economics has also been applied to problems of intertemporal choice. The most prominent idea is that of hyperbolic discounting, in which a high rate of discount is used between the present and the near future, and a lower rate between the near future and the far future. This pattern of discounting is dynamically inconsistent (or time-inconsistent), and therefore inconsistent with some models of rational choice, since the rate of discount between time t and t+1 will be low at time t-1, when t is the near future, but high at time t, when t is the present and time t+1 the near future.
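The dynamic inconsistency just described can be checked with a small sketch. The payoffs ($100 versus $110 one period later) and the curvature parameter k = 1 are illustrative assumptions, not values from the article:

```python
# Illustrative sketch: preference reversal under hyperbolic discounting.

def hyperbolic(t, k=1.0):
    """Hyperbolic discount factor: falls steeply near the present."""
    return 1.0 / (1.0 + k * t)

def exponential(t, delta=0.9):
    """Exponential discount factor: constant per-period discount rate."""
    return delta ** t

def prefers_later(discount, delay):
    """True if $110 at (delay + 1) beats $100 at delay under `discount`."""
    return 110 * discount(delay + 1) > 100 * discount(delay)

# Hyperbolic: the choice reverses as the common delay grows.
print(prefers_later(hyperbolic, 0))   # → False: the immediate $100 wins
print(prefers_later(hyperbolic, 10))  # → True: $110 wins once both dates are distant
# Exponential: the ranking never depends on the common delay.
print(prefers_later(exponential, 0) == prefers_later(exponential, 10))  # → True
```

Because the exponential discount factor shrinks by the same ratio every period, the comparison is delay-invariant; the hyperbolic factor discounts the near future much more sharply than the far future, producing the time-inconsistent reversal.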
Methodology

At the outset, behavioral economics and finance theories were developed almost exclusively from experimental observations and survey responses, though in more recent times real-world data have taken a more prominent position. fMRI
has also been used to determine which areas of the brain are active during various steps of economic decision making. Experiments simulating market situations such as stock market trading and auctions are seen as particularly useful as they can be used to isolate the effect of a particular bias upon behavior; observed market behavior can typically be explained in a number of ways, and carefully designed experiments can help narrow the range of plausible explanations. Experiments are designed to be incentive-compatible, with binding transactions involving real money the norm.
Key observations

There are three main themes in behavioral finance and economics (Shefrin, 2002):

Heuristics: People often make decisions based on approximate rules of thumb, not strictly rational analyses. See also cognitive biases and bounded rationality.
Framing: The way a problem or decision is presented to the decision maker will affect his action.
Market inefficiencies: There are explanations for observed market outcomes that are contrary to rational expectations and market efficiency. These include mispricings, non-rational decision making, and return anomalies. Richard Thaler, in particular, has written a long series of papers describing specific market anomalies from a behavioral perspective.

Recently, Barberis, Shleifer, and Vishny (1998), as well as Daniel, Hirshleifer, and Subrahmanyam (1998), have built models based on extrapolation (seeing patterns in random sequences) and overconfidence to explain security market over- and underreactions, though such models have not been used in the money management industry. These models assume that errors or biases are correlated across agents, so that they do not cancel out in aggregate. This would be the case if a large fraction of agents look at the same signal (such as the advice of an analyst) or have a common bias.

More generally, cognitive biases may also have strong anomalous effects in aggregate if there is a social contamination with a strong emotional content (collective greed or fear), leading to more widespread phenomena such as herding and groupthink. Behavioral finance and economics rests as much on social psychology within large groups as on individual psychology. However, some behavioral models explicitly demonstrate that a small but significant anomalous group can also have market-wide effects (e.g. Fehr and Schmidt, 1999).
Behavioral finance topics

Key observations made in the behavioral finance literature include the lack of symmetry between decisions to acquire or keep resources, known colloquially as the "bird in the bush" paradox, and the strong loss aversion or regret attached to any decision where some emotionally valued resources (e.g. a home) might be totally lost. Loss aversion appears to manifest itself in investor behavior as an unwillingness to sell shares or other equity if doing so would force the trader to realise a nominal loss (Genesove & Mayer, 2001). It may also help explain why housing market prices do not adjust downwards to market-clearing levels during periods of low demand.

Applying a version of prospect theory, Benartzi and Thaler (1995) claim to have solved the equity premium puzzle, something conventional finance models have been unable to do. Presently, some researchers in experimental finance use experimental methods, e.g. creating an artificial market with simulation software, to study people's decision-making process and behavior in financial markets.
Behavioral finance models
Some financial models used in money management and asset valuation use behavioral finance parameters, for example Thaler's model of price reactions to information, with three phases (underreaction, adjustment, overreaction) creating a price trend. The characteristic of overreaction is that the average return of asset prices following a series of announcements of good news is lower than the average return following a series of bad announcements. In other words, overreaction occurs if the market reacts so strongly to news that it subsequently needs to be corrected in the opposite direction. As a result, assets that were winners in the past should not be seen as an indication to invest in, as their risk-adjusted returns in the future are relatively low compared to stocks that were defined as losers in the past.
Criticisms of behavioral finance

Critics of behavioral finance, such as Eugene Fama, typically support the efficient market theory (though Fama may have reversed his position in recent years). They contend that behavioral finance is more a collection of anomalies than a true branch of finance and that these anomalies will eventually be priced out of the market or explained by appeal to market microstructure arguments. However, a distinction should be noted between individual biases and social biases; the former can be averaged out by the market, while the latter can create feedback loops that drive the market further and further from the equilibrium of the "fair price".

A specific example of this criticism is found in some attempted explanations of the equity premium puzzle. It is argued that the puzzle simply arises due to entry barriers (both practical and psychological) which have traditionally impeded entry by individuals into the stock market, and that returns between stocks and bonds should stabilize as electronic resources open up the stock market to a greater number of traders (see Freeman, 2004 for a review). In reply, others contend that most personal investment funds are managed through superannuation funds, so the effect of these putative barriers to entry would be minimal. In addition, professional investors and fund managers seem to hold more bonds than one would expect given return differentials.
Behavioral economics topics Models in behavioral economics are typically addressed to a particular observed market anomaly and modify standard neo-classical models by describing decision makers as using heuristics and being affected by framing effects. In general, behavioural economics sits within the neoclassical framework, though the standard assumption of rational behaviour is often challenged.
Heuristics Prospect theory - Loss aversion - Status quo bias - Gambler's fallacy - Self-serving bias
Framing Cognitive framing - Mental accounting - Reference utility - Anchoring
Anomalies Disposition effect - endowment effect - equity premium puzzle - money illusion - dividend puzzle -fairness (inequity aversion) - Efficiency wage hypothesis - reciprocity - intertemporal consumption - present-biased
preferences - behavioral life cycle hypothesis - wage stickiness - price stickiness - Visceral influences Earle's Curve of Predictive Reliability - limits to arbitrage - income and happiness - momentum investing
Criticisms of behavioral economics

Critics of behavioral economics typically stress the rationality of economic agents (see Myagkov and Plott (1997), amongst others). They contend that experimentally observed behavior is inapplicable to market situations, as learning opportunities and competition will ensure at least a close approximation of rational behavior. Others note that cognitive theories, such as prospect theory, are models of decision making, not generalized economic behavior, and are only applicable to the sort of once-off decision problems presented to experiment participants or survey respondents.

Traditional economists are also skeptical of the experimental and survey-based techniques which are used extensively in behavioral economics. Economists typically stress revealed preferences over stated preferences (from surveys) in the determination of economic value. Experiments and surveys must be designed carefully to avoid systemic biases, strategic behavior and lack of incentive compatibility, and many economists are distrustful of results obtained in this manner due to the difficulty of eliminating these problems. Rabin (1998) dismisses these criticisms, claiming that results are typically reproduced in various situations and countries and can lead to good theoretical insight. Behavioral economists have also responded to these criticisms by focusing on field studies rather than lab experiments.

Some economists see this split as a fundamental schism between experimental economics and behavioral economics, but prominent behavioral and experimental economists tend to overlap techniques and approaches in answering common questions. For example, many prominent behavioral economists are actively investigating neuroeconomics, which is entirely experimental and cannot be verified in the field. Other proponents of behavioral economics note that neoclassical models often fail to predict outcomes in real world contexts.
Behavioral insights can be used to update neoclassical equations, and behavioral economists note that these revised models not only reach the same correct predictions as the traditional models, but also correctly predict outcomes where the traditional models failed.
Key figures in behavioral economics Dan Ariely Colin Camerer Ernst Fehr Daniel Kahneman David Laibson George Loewenstein Matthew Rabin Paul Slovic Richard Thaler Amos Tversky
Key scholars in behavioral finance Nicholas Barberis Shlomo Benartzi Kent Daniel
David Hirshleifer Harrison Hong Terrance Odean Hersh Shefrin Robert Shiller Andrei Shleifer Meir Statman Jeremy Stein A. Subrahmanyam Richard Thaler
References

Camerer, C. F.; Loewenstein, G. & Rabin, M. (eds.) (2003). Advances in Behavioral Economics.
Barberis, N.; Shleifer, A.; Vishny, R. (1998). "A Model of Investor Sentiment", Journal of Financial Economics, 49, 307-343.
Daniel, K.; Hirshleifer, D.; Subrahmanyam, A. (1998). "Investor Psychology and Security Market Over- and Underreactions", Journal of Finance, 53, 1839-1885.
Cunningham, Lawrence A. (2002). "Behavioral Finance and Investor Governance", 59 Washington & Lee Law Review (http://papers.ssrn.com/abstract_id=255778)
Kahneman, D. & Tversky, A. (1979). "Prospect Theory: An Analysis of Decision under Risk", Econometrica, XLVII, 263-291.
Rabin, Matthew (1998). "Psychology and Economics", Journal of Economic Literature, 36(1), 11-46.
Shefrin, Hersh (2002). Beyond Greed and Fear: Understanding behavioral finance and the psychology of investing. Oxford University Press.
Shleifer, Andrei (1999). Inefficient Markets: An Introduction to Behavioral Finance. Oxford University Press.
Benartzi, Shlomo & Thaler, Richard H. (1995). "Myopic Loss Aversion and the Equity Premium Puzzle", The Quarterly Journal of Economics, 110(1).
See also

Cognitive psychology
Important publications in behavioral finance (economics)
Important publications in behavioral finance (sociology)
Neuroeconomics
Experimental economics
Experimental finance
Culture speculation
Confirmation bias
Hindsight bias
Cognitive bias
Journal of Behavioral Finance
Retrieved from "http://en.wikipedia.org/wiki/Behavioral_finance". This page was last modified 11:25, 20 April 2007.
4/26/2007 9:00 PM
Cumulative prospect theory - Wikipedia, the free encyclopedia
http://en.wikipedia.org/w/index.php?title=Cumulative_prospect_theo...
Cumulative prospect theory From Wikipedia, the free encyclopedia
Cumulative Prospect Theory (CPT) is a descriptive model of decision making under risk, introduced by Amos Tversky and Daniel Kahneman in 1992 (Tversky & Kahneman, 1992). It is a further development and variant of prospect theory. The difference from the original version of prospect theory is that weighting is applied to the cumulative probability distribution function, as in rank-dependent expected utility theory, rather than to the probabilities of individual outcomes. In 2002, Daniel Kahneman received the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel for his contributions to behavioral economics, in particular the development of Cumulative Prospect Theory.
Contents
1 Outline of the model
2 Differences to Prospect Theory
3 Applications
4 References
Outline of the model
[Figure 1: A typical value function in Prospect Theory and Cumulative Prospect Theory. It assigns values to possible outcomes of a lottery.]

[Figure 2: A typical weighting function in Cumulative Prospect Theory. It transforms objective cumulative probabilities into subjective cumulative probabilities.]
The main observation of CPT (and its predecessor Prospect Theory) is that people tend to think of possible outcomes relative to a certain reference point (often the status quo) rather than to the final status, a phenomenon which is called the framing effect. Moreover, they have different risk attitudes towards gains (i.e. outcomes above the reference point) and losses (i.e. outcomes below the reference point) and care generally more about potential losses than potential gains (loss aversion). Finally, people tend to overweight extreme but unlikely events, and underweight "average" events. The last point is a difference to Prospect Theory, which assumes that people overweight unlikely events, independently of their relative outcomes.
CPT incorporates these observations in a modification of Expected Utility Theory by replacing final wealth with payoffs relative to the reference point, by replacing the utility function with a value function depending on this relative payoff, and by replacing cumulative probabilities with weighted cumulative probabilities. In the general case, this leads to the following formula for the subjective utility of a risky outcome described by the probability measure p:

U(p) = \int_{-\infty}^{0} v(x) \, \frac{d}{dx}\bigl(w(F(x))\bigr) \, dx + \int_{0}^{+\infty} v(x) \, \frac{d}{dx}\bigl(-w(1 - F(x))\bigr) \, dx

where v is the value function (typical form shown in Figure 1), w is the weighting function (as sketched in Figure 2), and F(x) = \int_{-\infty}^{x} dp, i.e. the integral of the probability measure over all values up to x, is the cumulative probability.

This formula is a generalization of the original formulation by Tversky and Kahneman which allows for arbitrary (continuous) outcomes, and not only for finitely many distinct outcomes.
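For finitely many outcomes the subjective utility reduces to a sum, with decision weights built from cumulative probabilities separately for gains and losses. The sketch below is an illustration, not from the article; it assumes the Tversky-Kahneman (1992) functional forms, and using one weighting curve with gamma = 0.61 for both gains and losses simplifies their estimates (0.61 for gains, 0.69 for losses):

```python
# Illustrative sketch: CPT value of a finite prospect. Decision weights are
# differences of the weighting function applied to cumulative ("this outcome
# or anything more extreme") probabilities, separately for gains and losses.

def v(x, alpha=0.88, lam=2.25):
    """Value function over gains/losses relative to the reference point."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def w(p, gamma=0.61):
    """Inverse-S weighting function applied to cumulative probabilities."""
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt(prospect):
    """CPT value of a dict {outcome: probability}."""
    total = 0.0
    gains = sorted((x, p) for x, p in prospect.items() if x >= 0)
    losses = sorted(((x, p) for x, p in prospect.items() if x < 0), reverse=True)
    for side in (gains, losses):
        tail = 0.0  # probability of a strictly more extreme outcome on this side
        for x, p in reversed(side):  # walk from the most extreme outcome inward
            total += (w(tail + p) - w(tail)) * v(x)
            tail += p
    return total

# A rare large loss gets a decision weight of about 0.13, far above its 5%
# probability; the outcome 0 contributes nothing since v(0) = 0.
print(cpt({-1000: 0.05, 0: 0.95}))
```

Because the weighting is applied to cumulative tails, only the most extreme outcomes on each side are overweighted; an unlikely intermediate outcome receives a weight close to (or below) its probability.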
Differences to Prospect Theory

The main modification to Prospect Theory is that, as in rank-dependent expected utility theory, cumulative probabilities are transformed, rather than the probabilities themselves. This leads to the aforementioned overweighting of extreme events which occur with small probability, rather than to an overweighting of all small-probability events. The modification helps to avoid a violation of first-order stochastic dominance and enables the above generalization to arbitrary outcome distributions. Prospect Theory, in contrast, can only be applied to finitely many outcomes. CPT is therefore, on theoretical grounds, an improvement over Prospect Theory.
Applications Cumulative prospect theory has been applied to a diverse range of situations which appear inconsistent with standard economic rationality, in particular the equity premium puzzle, the asset allocation puzzle, the status quo bias, various gambling and betting puzzles, intertemporal consumption and the endowment effect.
References

Tversky, Amos & Kahneman, Daniel (1992). "Advances in prospect theory: Cumulative representation of uncertainty", Journal of Risk and Uncertainty, 5, 297-323.

Retrieved from "http://en.wikipedia.org/wiki/Cumulative_prospect_theory". This page was last modified 01:36, 11 April 2007.
4/26/2007 8:26 PM
List of cognitive biases - Wikipedia, the free encyclopedia
http://en.wikipedia.org/w/index.php?title=List_of_cognitive_biases&pr...
List of cognitive biases From Wikipedia, the free encyclopedia
A cognitive bias is a distortion in the way humans perceive reality (see also cognitive distortion). See also the list of thinking-related topic lists. Some of these biases have been verified empirically in the field of psychology; others are considered general categories of bias.
Contents
1 Decision-making and behavioral biases
2 Biases in probability and belief
3 Social biases
4 Memory errors
5 Common theoretical causes of some cognitive biases
6 Notes
7 References
8 See also
Decision-making and behavioral biases

Many of these biases are studied for how they affect belief formation, business decisions and scientific research.

Bandwagon effect — the tendency to do (or believe) things because many other people do (or believe) the same. Related to groupthink, herd behaviour, and manias.
Bias blind spot — the tendency not to compensate for one's own cognitive biases.
Choice-supportive bias — the tendency to remember one's choices as better than they actually were.
Confirmation bias — the tendency to search for or interpret information in a way that confirms one's preconceptions.
Congruence bias — the tendency to test hypotheses exclusively through direct testing, in contrast to tests of possible alternative hypotheses.
Contrast effect — the enhancement or diminishment of a weight or other measurement when compared with a recently observed contrasting object.
Déformation professionnelle — the tendency to look at things according to the conventions of one's own profession, forgetting any broader point of view.
Endowment effect — "the fact that people often demand much more to give up an object than they would be willing to pay to acquire it".[1]
Focusing effect — prediction bias occurring when people place too much importance on one aspect of an event; causes error in accurately predicting the utility of a future outcome.
Hyperbolic discounting — the tendency for people to have a stronger preference for more immediate payoffs relative to later
payoffs, the closer to the present both payoffs are. Illusion of control — the tendency for human beings to believe they can control or at least influence outcomes that they clearly cannot. Impact bias — the tendency for people to overestimate the length or the intensity of the impact of future feeling states. Information bias — the tendency to seek information even when it cannot affect action. Loss aversion — "the disutility of giving up an object is greater than the utility associated with acquiring it". [2] (see also sunk cost effects and Endowment effect). Neglect of probability — the tendency to completely disregard probability when making a decision under uncertainty. Mere exposure effect — the tendency for people to express undue liking for things merely because they are familiar with them. Omission bias — The tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions). Outcome bias — the tendency to judge a decision by its eventual outcome instead of based on the quality of the decision at the time it was made. Planning fallacy — the tendency to underestimate task-completion times. Post-purchase rationalization — the tendency to persuade oneself through rational argument that a purchase was a good value. Pseudocertainty effect — the tendency to make risk-averse choices if the expected outcome is positive, but make risk-seeking choices to avoid negative outcomes. Selective perception — the tendency for expectations to affect perception. Status quo bias — the tendency for people to like things to stay relatively the same (see also Loss aversion and Endowment effect).[3] Von Restorff effect — the tendency for an item that "stands out like a sore thumb" to be more likely to be remembered than other items. Zero-risk bias — preference for reducing a small risk to zero over a greater reduction in a larger risk.
Biases in probability and belief
Many of these biases are often studied for how they affect business and economic decisions and how they affect experimental research.
Ambiguity effect — the avoidance of options for which missing information makes the probability seem "unknown".
Anchoring — the tendency to rely too heavily, or "anchor," on one trait or piece of information when making decisions.
Anthropic bias — the tendency for one's evidence to be biased by observation selection effects.
Attentional bias — neglect of relevant data when making judgments of a correlation or association.
Availability heuristic — a biased prediction, due to the tendency to focus on the most salient and emotionally charged outcome.
Clustering illusion — the tendency to see patterns where actually none exist.
Conjunction fallacy — the tendency to assume that specific conditions are more probable than general ones.
Gambler's fallacy — the tendency to assume that individual random events are influenced by previous random events. For example, "I've flipped heads with this coin so many times that tails is bound to come up sooner or later."
Hindsight bias — sometimes called the "I-knew-it-all-along" effect, the inclination to see past events as being predictable.
Illusory correlation — beliefs that inaccurately suppose a relationship between a certain type of action and an effect.
Ludic fallacy — the analysis of chance-related problems within the narrow frame of games, ignoring the complexity of reality and the non-Gaussian distribution of many things.
Neglect of prior base rates effect — the tendency to fail to incorporate prior known probabilities which are pertinent to the decision at hand.
Observer-expectancy effect — when a researcher expects a given result and therefore unconsciously manipulates an experiment or misinterprets data in order to find it (see also subject-expectancy effect).
Optimism bias — the systematic tendency to be over-optimistic about the outcome of planned actions.
Overconfidence effect — the tendency to overestimate one's own abilities.
Positive outcome bias — a tendency in prediction to overestimate the probability of good things happening (see also wishful thinking, optimism bias, and valence effect).
Recency effect — the tendency to weigh recent events more than earlier events (see also peak-end rule).
Reminiscence bump — the effect that people tend to recall more personal events from adolescence and early adulthood than from other lifetime periods.
Rosy retrospection — the tendency to rate past events more positively than they were actually rated when they occurred.
Primacy effect — the tendency to weigh initial events more than subsequent events.
Subadditivity effect — the tendency to judge the probability of the whole to be less than the probabilities of the parts.
Telescoping effect — the effect that recent events appear to have occurred more remotely and remote events appear to have occurred more recently.
Texas sharpshooter fallacy — the fallacy of selecting or adjusting a hypothesis after the data are collected, making it impossible to test the hypothesis fairly.
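The gambler's fallacy above is easy to check numerically: for independent fair coin flips, the chance of heads on the next flip is unchanged by any preceding run of heads. A minimal simulation sketch in Python (the function name and parameters are ours, purely for illustration):

```python
import random

def next_flip_after_streak(streak_len, trials=200_000, seed=1):
    """Estimate P(heads on the next flip | the previous `streak_len`
    flips were all heads) over a long run of independent fair flips."""
    rng = random.Random(seed)
    heads_after = 0   # heads seen immediately after a qualifying streak
    total_after = 0   # flips seen immediately after a qualifying streak
    streak = 0        # length of the current run of heads
    for _ in range(trials):
        flip = rng.random() < 0.5  # True = heads
        if streak >= streak_len:
            total_after += 1
            heads_after += flip
        streak = streak + 1 if flip else 0
    return heads_after / total_after

# The conditional frequency stays near 0.5 for any streak length:
# a long run of heads does not make tails "due".
```

Because each flip is independent, the estimate hovers around 0.5 whether the preceding streak was one head or five.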
Social biases
Most of these biases are labeled as attributional biases.
Actor-observer bias — the tendency for explanations of other individuals' behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation, coupled with the opposite tendency for the self: one's explanations for one's own behaviors overemphasize the situation and underemphasize the influence of personality (see also fundamental attribution error).
Egocentric bias — occurs when people claim more responsibility for themselves for the results of a joint action than an outside observer would.
Forer effect (aka Barnum effect) — the tendency to give high accuracy ratings to descriptions of one's personality that supposedly are tailored specifically to oneself but are in fact vague and general enough to apply to a wide range of people. For example, horoscopes.
False consensus effect — the tendency for people to overestimate the degree to which others agree with them.
Fundamental attribution error — the tendency for people to over-emphasize personality-based explanations for behaviors observed in others while under-emphasizing the role and power of situational influences on the same behavior (see also actor-observer bias, group attribution error, positivity effect, and negativity effect).
Halo effect — the tendency for a person's positive or negative traits to "spill over" from one area of their personality to another in others' perceptions of them (see also physical attractiveness stereotype).
Illusion of asymmetric insight — people perceive their knowledge of their peers to surpass their peers' knowledge of them.
Illusion of transparency — people overestimate others' ability to know them, and they also overestimate their ability to know others.
Ingroup bias — preferential treatment people give to those whom they perceive to be members of their own groups.
Just-world phenomenon — the tendency for people to believe that the world is "just" and therefore people "get what they deserve."
Lake Wobegon effect — the human tendency to report flattering beliefs about oneself and believe that one is above average (see also worse-than-average effect and overconfidence effect).
Notational bias — a form of cultural bias in which a notation induces the appearance of a nonexistent natural law.
Outgroup homogeneity bias — individuals see members of their own group as being relatively more varied than members of other groups.
Projection bias — the tendency to unconsciously assume that others share the same or similar thoughts, beliefs, values, or positions.
Self-serving bias — the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests (see also group-serving bias).
Self-fulfilling prophecy — the tendency to engage in behaviors that elicit results which will (consciously or subconsciously) confirm our beliefs.
System justification — the tendency to defend and bolster the status quo, i.e. existing social, economic, and political arrangements tend to be preferred, and alternatives disparaged, sometimes even at the expense of individual and collective self-interest.
Trait ascription bias — the tendency for people to view themselves as relatively variable in terms of personality, behavior, and mood while viewing others as much more predictable.
Memory errors
Further information: Memory bias
False memory
Hindsight bias, also known as the "I-knew-it-all-along" effect
Selective memory
Common theoretical causes of some cognitive biases
Attribution theory, especially:
Salience
Cognitive dissonance, and related:
Impression management
Self-perception theory
Heuristics, including:
Availability heuristic
Representativeness heuristic
Adaptive bias
Notes
1. ^ (Kahneman, Knetsch, and Thaler 1991: 193) Richard Thaler coined the term "endowment effect."
2. ^ (Kahneman, Knetsch, and Thaler 1991: 193) Daniel Kahneman, together with Amos Tversky, coined the term "loss aversion."
3. ^ (Kahneman, Knetsch, and Thaler 1991: 193)
References
Baron, J. (2000). Thinking and Deciding (3rd ed.). New York: Cambridge University Press. ISBN 0-521-65030-5.
Bishop, M. A. & Trout, J. D. (2004). Epistemology and the Psychology of Human Judgment. New York: Oxford University Press. ISBN 0-19-516229-3.
Gilovich, T. (1993). How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life. New York: The Free Press. ISBN 0-02-911706-2.
Gilovich, T., Griffin, D. & Kahneman, D. (Eds.). (2002). Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge, UK: Cambridge University Press. ISBN 0-521-79679-2.
Kahneman, D., Slovic, P. & Tversky, A. (Eds.). (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge, UK: Cambridge University Press. ISBN 0-521-28414-7.
Kahneman, D., Knetsch, J. L. & Thaler, R. H. (1991). "Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias." The Journal of Economic Perspectives 5(1): 193-206.
Plous, S. (1993). The Psychology of Judgment and Decision Making. New York: McGraw-Hill. ISBN 0-07-050477-6.
See also
Attribution theory
Systematic bias
Groupthink
Logical fallacy
Media bias
Self-deception
System justification
Retrieved from "http://en.wikipedia.org/wiki/List_of_cognitive_biases"
Categories: Cognitive biases | Cognition | Cognitive science | Psychology lists
This page was last modified 13:17, 21 April 2007.
SAPHIRE - Wikipedia, the free encyclopedia
1 of 3
http://en.wikipedia.org/w/index.php?title=SAPHIRE&printable=yes
SAPHIRE
From Wikipedia, the free encyclopedia
SAPHIRE is a probabilistic risk and reliability assessment software tool. SAPHIRE stands for Systems Analysis Programs for Hands-on Integrated Reliability Evaluations. The system was developed for the U.S. Nuclear Regulatory Commission (NRC) by the Idaho National Laboratory. Development began in the mid-1980s when the NRC began exploring two notions: 1) that Probabilistic Risk Assessment (PRA) information could be displayed and manipulated using the emerging microcomputer technology of the day, and 2) that the rapid advancement of PRA technology required a relatively inexpensive and readily available platform for teaching PRA concepts to students.
Contents
1 The history of SAPHIRE
2 Advanced Analysis
3 Basic Event Probabilities
4 See also
5 External links
The history of SAPHIRE
1987: Version 1 of the code, called IRRAS (now known as SAPHIRE), introduced an innovative way to draw, edit, and analyze graphical fault trees.
1989: Version 2 is released, incorporating the ability to draw, edit, and analyze graphical event trees.
1990: Analysis improvements to IRRAS led to the release of Version 4 and the formation of the IRRAS Users Group.
1992: Creation of 32-bit IRRAS, Version 5, resulted in an order-of-magnitude decrease in analysis time. New features included: end state analysis; fire, flood, and seismic modules; rule-based cut set processing; and rule-based fault tree to event tree linking.
1997: SAPHIRE for Windows, version 6.x, is released. Use of a Windows user interface makes SAPHIRE easy to learn. The new "plug-in" feature allows analysts to expand on the built-in probability calculations.
1999: SAPHIRE for Windows, version 7.x, is released. Enhancements are made to the event tree "linking rules" and to the use of dual language capability inside the SAPHIRE database.
2005: SAPHIRE for Windows, version 8.x, undergoes development.
The evolution of software and related analysis methods has led to the current generation of the SAPHIRE tool. The current SAPHIRE software code-base started in the mid-1980s as part of the NRC's general risk activities. In 1986, work commenced on the precursor to the SAPHIRE software; this software package was named the Integrated Reliability and Risk Analysis System, or IRRAS. IRRAS was the first IBM-compatible PC-based risk analysis tool developed at the Idaho National Laboratory, thereby allowing users to work in a graphical interface rather than with mainframe punch cards. While limited to the analysis of only medium-size fault trees, version 1 of IRRAS was the initial step in the progress that today has led to the SAPHIRE software, software that is capable of running on multiple processors simultaneously and able to handle extremely large analyses.
4/26/2007 8:56 PM
Advanced Analysis
SAPHIRE contains an advanced minimal cut set solving engine. This solver, which has been fine-tuned and optimized over time, uses a variety of techniques for analysis, including:
Extensive use of recursive routines
Restructuring and expansion of the logic model
Conversion of complemented gates and treatment of success branches
Logic pruning due to TRUE or FALSE house events
Coalescing gates and the identification of modules and independent sub-trees
Intermediate results caching
Bit-table Boolean absorption
Use of these and other optimization methods has resulted in SAPHIRE having one of the most powerful analysis engines in use for probabilistic risk assessment today.
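The cut set expansion and Boolean absorption ideas above can be sketched compactly. The following toy Python fragment is not SAPHIRE's implementation, only an illustration of the underlying technique: it recursively expands a small coherent fault tree (AND/OR gates over basic events) into cut sets, then applies absorption to keep only the minimal ones. All names (`cut_sets`, `minimize`, the example gates) are hypothetical.

```python
from itertools import product

def cut_sets(node, gates):
    """Recursively expand a fault tree into (not necessarily minimal)
    cut sets. `gates` maps a gate name to ("AND" | "OR", children);
    any name not in `gates` is a basic event."""
    if node not in gates:
        return [frozenset([node])]
    op, children = gates[node]
    child_sets = [cut_sets(c, gates) for c in children]
    if op == "OR":
        # An OR gate fails if any child fails: union of the children's cut sets.
        return [cs for sets in child_sets for cs in sets]
    # An AND gate fails only if every child fails: merge one cut set per child.
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimize(all_sets):
    """Boolean absorption: drop any cut set that contains another cut set."""
    minimal = []
    for cs in sorted(set(all_sets), key=len):
        if not any(m <= cs for m in minimal):
            minimal.append(cs)
    return minimal

# Hypothetical tree: TOP fails if the pump fails AND (valve OR pump) fails.
gates = {"TOP": ("AND", ["PUMP", "G1"]),
         "G1": ("OR", ["VALVE", "PUMP"])}
# Absorption removes {PUMP, VALVE}, leaving the single minimal cut set {PUMP}.
```

Production solvers like SAPHIRE's add the further optimizations listed above (module detection, caching, bit-table absorption) because naive expansion blows up combinatorially on trees with tens of thousands of gates.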
Basic Event Probabilities
General basic event probability capabilities of SAPHIRE include:
Four different Markov models to represent the failure of a single component
A common cause module to determine a group common cause failure probability for groups of up to six redundant components
A load-capacity calculation allowing the user to specify a load and capacity distribution to determine P(Capacity < Load)
A human reliability analysis calculator to determine a human failure event probability based upon the task type and compounding performance shaping factors
The use of template events, which allow failure information to be shared where applicable
A seismic fragility method that uses an associated earthquake acceleration level to determine a component's failure probability
House events to set basic events to logically true or false, or to ignore the event
A module to determine the loss-of-offsite-power frequency and recoverability
SAPHIRE has been designed to handle large fault trees, where a tree may have up to 64,000 basic events and gates. To handle these fault trees, two mechanisms for developing and modifying the fault tree are available: a graphical editor and a hierarchical logic editor. Analysts may use either editor; if the logic is modified, SAPHIRE can redraw the fault tree graphic. Conversely, if the user modifies the fault tree graphic, SAPHIRE automatically updates the associated logic. Applicable objects available in the fault tree editors include basic events and several gate types, including OR, AND, NOR, NAND, and N-of-M. In addition to these objects, SAPHIRE has a unique feature known as "table events" that allows the user to group up to eight basic events together on the fault tree graphic, thereby compacting the size of the fault tree on the printed page or computer screen. All of these objects, though, represent traditional static-type Boolean logic models.
Models explicitly capturing dynamic or time-dependent situations are not available in current versions of SAPHIRE.
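The load-capacity calculation listed above has a simple closed form in the common special case where both load and capacity are independent normal random variables: the margin M = Capacity - Load is then itself normal, and failure is P(M < 0). A sketch of that special case (the function name is ours; the actual SAPHIRE module supports user-specified distributions):

```python
import math

def p_capacity_below_load(mu_c, sd_c, mu_l, sd_l):
    """P(Capacity < Load) for independent normal capacity and load.
    The margin M = C - L is normal with mean mu_c - mu_l and
    variance sd_c**2 + sd_l**2; failure is P(M < 0)."""
    mu_m = mu_c - mu_l
    sd_m = math.sqrt(sd_c**2 + sd_l**2)
    # Standard normal CDF via erf: Phi(x) = (1 + erf(x / sqrt(2))) / 2
    z = -mu_m / sd_m
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Example: a capacity of 100 +/- 10 against a load of 70 +/- 10 gives a
# margin of 30 with standard deviation ~14.1, i.e. a failure probability
# near 1.7%.
```

For non-normal distribution pairs there is generally no closed form, and the integral P(Capacity < Load) is evaluated numerically or by Monte Carlo sampling.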
See also
Life-critical system
Reliability theory
Safety engineering
Nuclear reactor
Biomedical engineering
External links
http://saphire.inl.gov
Retrieved from "http://en.wikipedia.org/wiki/SAPHIRE"
Categories: Windows software | Reliability engineering
This page was last modified 18:31, 25 April 2007.