[[File:Equilibrium refinements.svg|thumb|250px|Selected equilibrium refinements in game theory. Arrows point from a refinement to the more general concept (i.e., ESS <math> \subset </math> Proper).]]
 
In [[game theory]], a '''solution concept''' is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are [[Economic equilibrium|equilibrium concepts]], most famously [[Nash equilibrium]].
 
Many solution concepts, for many games, will result in more than one solution. This puts any one of the solutions in doubt, so a game theorist may apply a '''refinement''' to narrow down the solutions. Each successive solution concept presented below improves on its predecessor by eliminating implausible equilibria in richer games.
 
==Formal definition==
 
Let <math>\Gamma</math> be the class of all games and, for each game <math>G \in \Gamma</math>, let <math>S_G</math> be the set of [[strategy profile]]s of <math>G</math>.  A ''solution concept'' is an element of the direct product <math>\Pi_{G \in \Gamma}2^{S_G};</math>  ''i.e''., a function <math>F: \Gamma \rightarrow \bigcup\nolimits_{G \in \Gamma} 2^{S_G}</math> such that <math>F(G) \subseteq S_G</math> for all <math>G \in \Gamma.</math>
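For example, the map that assigns to every game the set of its Nash equilibria is a solution concept, as is the trivial map <math>G \mapsto S_G</math> that selects every strategy profile; refinements correspond to maps that select smaller subsets.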
 
==Rationalizability and iterated dominance==
{{main|Rationalizability}}
 
In this solution concept, players are assumed to be rational and so '''[[dominance (game theory)#dominated strategies|strictly dominated strategies]]''' are eliminated from the set of strategies that might feasibly be played. A strategy is [[dominated strategy|strictly dominated]] when there is some other strategy available to the player that always has a higher payoff, regardless of the strategies that the other players choose. (Strictly dominated strategies are also important in [[minimax]] [[game-tree search]].) For example, in the (single period) [[prisoner's dilemma|prisoners' dilemma]] (shown below), ''cooperate'' is strictly dominated by ''defect'' for both players because either player is always better off playing ''defect'', regardless of what his opponent does.
 
{| class="wikitable" style="margin: 1em auto 1em auto"
! !! Prisoner 2 Cooperate !! Prisoner 2 Defect
|-
! Prisoner 1 Cooperate
| −0.5, −0.5 || −10, 0
|-
! Prisoner 1 Defect
| 0, −10 || '''−2, −2'''
|}
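As an illustrative sketch, iterated elimination of strictly dominated strategies can be run mechanically on the payoff table above (strategy 0 = ''cooperate'', 1 = ''defect''); the procedure repeatedly removes any pure strategy that is strictly dominated by another remaining strategy:

<syntaxhighlight lang="python">
# Iterated elimination of strictly dominated pure strategies (illustrative sketch).
# Payoffs are those of the prisoners' dilemma table above, indexed as
# [Prisoner 1's strategy][Prisoner 2's strategy]; 0 = Cooperate, 1 = Defect.
payoffs = {
    1: [[-0.5, -10.0], [0.0, -2.0]],    # Prisoner 1's payoffs
    2: [[-0.5, 0.0], [-10.0, -2.0]],    # Prisoner 2's payoffs
}

def pay(player, own, opp):
    """Payoff to `player` from playing `own` while the opponent plays `opp`."""
    return payoffs[player][own][opp] if player == 1 else payoffs[player][opp][own]

def strictly_dominated(player, s, own_set, opp_set):
    """True if some other remaining strategy beats s against every remaining opponent strategy."""
    return any(all(pay(player, t, o) > pay(player, s, o) for o in opp_set)
               for t in own_set if t != s)

def iterated_elimination(rows, cols):
    """Remove strictly dominated strategies until none remain for either player."""
    changed = True
    while changed:
        changed = False
        for player, own_set, opp_set in ((1, rows, cols), (2, cols, rows)):
            for s in list(own_set):
                if strictly_dominated(player, s, own_set, opp_set):
                    own_set.remove(s)
                    changed = True
    return rows, cols

print(iterated_elimination([0, 1], [0, 1]))   # ([1], [1]): only (Defect, Defect) survives
</syntaxhighlight>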
 
==Nash equilibrium==
{{main|Nash equilibrium}}
 
A Nash equilibrium is a [[strategy profile]] (a strategy profile specifies a strategy for every player; e.g., in the above prisoners' dilemma game, (''cooperate'', ''defect'') specifies that prisoner 1 plays ''cooperate'' and prisoner 2 plays ''defect'') in which every player's strategy is a best response to the strategies of the other players. A player's strategy is a [[best response]] to the other players' strategies if no other available strategy would yield that player a higher payoff, given the strategies the others are playing.
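For concreteness, a minimal sketch, reusing the payoff table above (strategy 0 = ''cooperate'', 1 = ''defect''), enumerates the pure-strategy profiles of the prisoners' dilemma and keeps those in which neither prisoner can gain by a unilateral deviation:

<syntaxhighlight lang="python">
# Pure-strategy Nash equilibria of the prisoners' dilemma above (illustrative sketch).
# Payoffs indexed as [Prisoner 1's strategy][Prisoner 2's strategy]; 0 = Cooperate, 1 = Defect.
u1 = [[-0.5, -10.0], [0.0, -2.0]]     # Prisoner 1's payoffs
u2 = [[-0.5, 0.0], [-10.0, -2.0]]     # Prisoner 2's payoffs

def is_nash(s1, s2):
    """True if each prisoner's strategy is a best response to the other's."""
    best1 = all(u1[s1][s2] >= u1[t][s2] for t in (0, 1))   # Prisoner 1 cannot gain by deviating
    best2 = all(u2[s1][s2] >= u2[s1][t] for t in (0, 1))   # Prisoner 2 cannot gain by deviating
    return best1 and best2

print([(s1, s2) for s1 in (0, 1) for s2 in (0, 1) if is_nash(s1, s2)])   # [(1, 1)]
</syntaxhighlight>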
 
==Backward induction==
{{main|Backward induction}}
 
There are games that have multiple Nash equilibria, some of which are unrealistic. In the case of dynamic games, unrealistic Nash equilibria may be eliminated by applying backward induction, which assumes that future play will be rational. It therefore eliminates non-credible threats, because carrying out such a threat would be irrational for a player ever actually called upon to do so.
 
For example, consider a dynamic game in which the players are an incumbent firm in an industry and a potential entrant to that industry. As it stands, the incumbent has a monopoly over the industry and does not want to lose some of its market share to the entrant. If the entrant chooses not to enter, the payoff to the incumbent is high (it maintains its monopoly) and the entrant neither loses nor gains (its payoff is zero). If the entrant enters, the incumbent can fight or accommodate the entrant. It can fight by lowering its price, running the entrant out of business (the entrant then incurs exit costs and receives a negative payoff) and damaging its own profits. If it accommodates the entrant it will lose some of its sales, but a high price will be maintained and it will receive greater profits than by lowering its price (though lower than monopoly profits).
 
If the entrant enters, the best response of the incumbent is to accommodate. If the incumbent accommodates, the best response of the entrant is to enter (and gain profit). Hence the strategy profile in which the incumbent accommodates if the entrant enters, and the entrant enters, is a Nash equilibrium. However, if the incumbent is going to play fight, the best response of the entrant is not to enter. If the entrant does not enter, it does not matter what the incumbent chooses to do, since there is no other firm to act against: fight and accommodate then yield the same payoffs to both players, because the incumbent never actually lowers its prices. Hence fight can be considered a best response of the incumbent when the entrant does not enter, and the strategy profile in which the incumbent fights if the entrant enters, and the entrant does not enter, is also a Nash equilibrium. But since the game is dynamic, any claim by the incumbent that it will fight is a non-credible threat: by the time the decision node at which it could fight is reached (i.e., the entrant has entered), fighting would be irrational. This second Nash equilibrium can therefore be eliminated by backward induction.
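As a minimal sketch of this reasoning, the following code performs backward induction on the entry game; the article gives no payoff numbers, so the figures below (entrant's payoff listed first, the incumbent's second) are illustrative assumptions only:

<syntaxhighlight lang="python">
# Backward induction on the entry game described above (illustrative sketch).
# Payoff numbers are assumed for the example, written as (entrant payoff, incumbent payoff).
game_tree = {
    "Entrant": {                        # the entrant moves first
        "Stay out": (0, 10),            # incumbent keeps its monopoly profit
        "Enter": {
            "Incumbent": {              # the incumbent then responds
                "Fight": (-1, 2),       # price war: entrant takes a loss, profits are damaged
                "Accommodate": (3, 5),  # market is shared at a high price
            }
        },
    }
}

def backward_induct(node):
    """Return (payoffs, plan): the payoffs reached under rational future play
    and the move chosen at each decision node."""
    if isinstance(node, tuple):                   # terminal node: a payoff vector
        return node, {}
    (player, moves), = node.items()
    idx = 0 if player == "Entrant" else 1         # index of the mover's own payoff
    best_move, best_payoffs, best_plan = None, None, {}
    for move, child in moves.items():
        payoffs, plan = backward_induct(child)
        if best_payoffs is None or payoffs[idx] > best_payoffs[idx]:
            best_move, best_payoffs, best_plan = move, payoffs, plan
    best_plan[player] = best_move
    return best_payoffs, best_plan

print(backward_induct(game_tree))
# ((3, 5), {'Incumbent': 'Accommodate', 'Entrant': 'Enter'})
</syntaxhighlight>

Under these assumed payoffs the procedure selects ''enter'' followed by ''accommodate'' and discards the equilibrium supported by the non-credible threat to fight.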
 
See also:
*[[Monetary policy#Monetary Policy Theory|Monetary policy theory]]
*[[Stackelberg competition]]
 
==Subgame perfect Nash equilibrium==
{{main|Subgame perfect equilibrium}}
 
A generalization of backward induction is subgame perfection. Backward induction assumes that all future play will be rational. In subgame perfect equilibria, play in every [[subgame]] is rational (specifically a Nash equilibrium). Backward induction can only be used in terminating (finite) games of definite length and cannot be applied to games with [[imperfect information]]. In these cases, subgame perfection can be used. The eliminated Nash equilibrium described above is subgame imperfect because it is not a Nash equilibrium of the subgame that starts at the node reached once the entrant has entered.
 
==Perfect Bayesian equilibrium==
{{main|Bayesian game}}
 
Sometimes subgame perfection does not impose a large enough restriction on unreasonable outcomes. For example, since subgames cannot cut through [[Information set (game theory)|information sets]], a game of imperfect information may have only one subgame – itself – and hence subgame perfection cannot be used to eliminate any Nash equilibria. A perfect Bayesian equilibrium (PBE) is a specification of players’ strategies ''and beliefs'' about which node in an information set has been reached by the play of the game. A belief about a decision node is the probability that a particular player assigns to that node being or becoming part of the play of the game (being on the ''equilibrium path''). The intuition behind PBE is that the strategies it specifies are rational given the beliefs it specifies, and the beliefs it specifies are consistent with those strategies.
 
In a Bayesian game, a strategy determines what a player plays at every information set controlled by that player. The requirement that beliefs be consistent with strategies is not part of subgame perfection; hence, PBE imposes a consistency condition on players’ beliefs. Just as in a Nash equilibrium no player’s strategy is strictly dominated, in a PBE no player’s strategy is strictly dominated beginning at any information set. That is, for every belief that the player could hold at that information set, there is no alternative strategy that yields a greater expected payoff for that player. Unlike the above solution concepts, this holds even at information sets off the equilibrium path. Thus in a PBE, players cannot threaten to play strategies that are strictly dominated beginning at any information set off the equilibrium path.
 
The ''Bayesian'' in the name of this solution concept alludes to the fact that players update their beliefs according to [[Bayes' theorem]]. They calculate probabilities given what has already taken place in the game.
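As a minimal illustration of such an update (the prior over opponent types and the type-conditional action probabilities below are assumed for the example, not taken from any particular game):

<syntaxhighlight lang="python">
# Bayesian belief update as used in a PBE (illustrative sketch): a player revises
# the probability that the opponent is a given type after observing an action.
prior = {"strong": 0.3, "weak": 0.7}                  # assumed prior belief over opponent types
action_prob = {                                       # assumed P(action | type), from the opponent's strategy
    "strong": {"enter": 0.9, "stay_out": 0.1},
    "weak":   {"enter": 0.2, "stay_out": 0.8},
}

def posterior(observed_action):
    """P(type | observed action) by Bayes' theorem."""
    joint = {t: prior[t] * action_prob[t][observed_action] for t in prior}
    total = sum(joint.values())                       # P(observed action)
    return {t: p / total for t, p in joint.items()}

print(posterior("enter"))   # {'strong': 0.658..., 'weak': 0.341...}
</syntaxhighlight>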
 
==Forward induction==
 
'''Forward induction''' is so called because just as backward induction assumes future play will be rational, forward induction assumes past play was rational. Where a player does not know what ''type'' another player is (i.e. there is imperfect and asymmetric information), that player may form a belief about that type by observing the other player's past actions. Hence, the belief such a player forms about the probability of the opponent being a certain type is based on that opponent's past play having been rational. A player may elect to signal his type through his actions.
 
Kohlberg and Mertens (1986) introduced the solution concept of stable equilibrium, a refinement that satisfies forward induction. A counter-example was found in which such a stable equilibrium did not satisfy backward induction. To resolve the problem, [[Jean-François Mertens]] introduced what game theorists now call the [[Mertens-stable equilibrium]] concept, probably the first solution concept satisfying both forward and backward induction.
 
==References==
 
* Cho, I-K. & Kreps, D. M. (1987) Signaling games and stable equilibria. ''Quarterly Journal of Economics'' 102:179–221.
* {{Cite book | last2=Tirole | first2=Jean | author2-link=Jean Tirole | last1=Fudenberg | first1=Drew | title=Game theory | publisher=[[MIT Press]] | isbn=978-0-262-06141-4 | year=1991 | url=http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=8204}}.
* [[John Harsanyi|Harsanyi, J.]] (1973) Oddness of the number of equilibrium points: a new proof. ''International Journal of Game Theory'' 2:235–250.
* Govindan, Srihari & Robert Wilson, 2008. "Refinements of Nash Equilibrium," The New Palgrave Dictionary of Economics, 2nd Edition.[http://myweb.uiowa.edu/sgovinda/Working-Papers/Refinements%20of%20Nash%20equilibrium-Palgrave-Govindan%20and%20Wils%E2%80%A6.pdf]
* Hines, W. G. S. (1987) Evolutionary stable strategies: a review of basic theory. ''Theoretical Population Biology'' 31:195–272.
* Kohlberg, Elon & Jean-François Mertens, 1986. "On the Strategic Stability of Equilibria," Econometrica, Econometric Society, vol. 54(5), pages 1003-37, September.
* {{cite book | last2=Shoham | first2=Yoav | last1=Leyton-Brown | first1=Kevin | title=Essentials of Game Theory: A Concise, Multidisciplinary Introduction  | publisher=Morgan & Claypool Publishers | isbn=978-1-59829-593-1 | url=http://www.gtessentials.org | year=2008 | location=San Rafael, CA}}
* Mertens, Jean-François, 1989. "Stable Equilibria - A reformulation. Part 1 Basic Definitions and Properties," Mathematics of Operations Research,  Vol. 14, No. 4, Nov. [http://www.jstor.org/pss/3689732]
* Noldeke, G. & Samuelson, L. (1993) An evolutionary analysis of backward and forward induction. ''Games & Economic Behaviour'' 5:425–454.
* [[John Maynard Smith|Maynard Smith, J.]] (1982) ''[[Evolution and the Theory of Games]]''.  ISBN 0-521-28884-3
* {{Cite book | last2=Rubinstein | first2=Ariel | author2-link=Ariel Rubinstein | last1=Osborne | first1=Martin J. | title=A course in game theory | publisher=[[MIT Press]] | isbn=978-0-262-65040-3 | year=1994}}.
* [[Reinhard Selten|Selten, R.]] (1983) Evolutionary stability in extensive two-person games. ''Math. Soc. Sci.'' 5:269–363.
* [[Reinhard Selten|Selten, R.]] (1988) Evolutionary stability in extensive two-person games – correction and further development. ''Math. Soc. Sci.'' 16:223–266
* {{cite book | last1=Shoham | first1=Yoav | last2=Leyton-Brown | first2=Kevin | title=Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations | publisher=[[Cambridge University Press]] | isbn=978-0-521-89943-7 | url=http://www.masfoundations.org | year=2009 | location=New York}}
* Thomas, B. (1985a) On evolutionary stable sets. ''J. Math. Biol.'' 22:105–115.
* Thomas, B. (1985b) Evolutionary stable sets in mixed-strategist models. ''Theor. Pop. Biol.'' 28:332–341
 
==See also==
*[[Extensive form game]]
*[[Trembling hand equilibrium]]
*"[[The Intuitive Criterion]]" (Cho and Kreps 1987)
 
{{Game theory}}
 
[[Category:Game theory]]
