'''Robust optimization''' is a field of [[optimization (mathematics)|optimization]] theory that deals with optimization problems in which a certain measure of robustness is sought against [[uncertainty]] that can be represented as deterministic variability in the value of the parameters of the problem itself and/or its solution.
== History ==

The origins of robust optimization date back to the establishment of modern [[decision theory]] in the 1950s and the use of '''worst case analysis''' and [[Wald's maximin model]] as a tool for the treatment of severe uncertainty. It became a discipline of its own in the 1970s, with parallel developments in several scientific and technological fields. Over the years, it has been applied in [[statistics]], but also in [[operations research]],<ref>{{cite journal|last=Bertsimas|first=Dimitris|coauthors=Sim, Melvyn|title=The Price of Robustness|journal=Operations Research|year=2004|volume=52|issue=1|pages=35–53|doi=10.1287/opre.1030.0065}}</ref> [[control theory]],<ref>{{cite journal|last=Khargonekar|first=P.P.|coauthors=Petersen, I.R.; Zhou, K.|title=Robust stabilization of uncertain linear systems: quadratic stabilizability and H<sup>∞</sup> control theory|journal=IEEE Transactions on Automatic Control|volume=35|issue=3|pages=356–361|doi=10.1109/9.50357}}</ref> [[finance]],<ref>[http://books.google.it/books?id=p6UHHfkQ9Y8C&lpg=PR11&ots=AqlJfX5Z0X&dq=economics%20robust%20optimization&lr&hl=it&pg=PR11#v=onepage&q&f=false%20 Robust portfolio optimization]</ref> [[logistics]],<ref>{{cite journal|last=Yu|first=Chian-Son|coauthors=Li, Han-Lin|title=A robust optimization model for stochastic logistic problems|journal=International Journal of Production Economics|volume=64|issue=1-3|pages=385–397|doi=10.1016/S0925-5273(99)00074-2}}</ref> [[manufacturing engineering]],<ref>{{cite journal|last=Strano|first=M|title=Optimization under uncertainty of sheet-metal-forming processes by the finite element method|journal=Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture|volume=220|issue=8|pages=1305–1315|doi=10.1243/09544054JEM480}}</ref> [[chemical engineering]],<ref>{{cite journal|last=Bernardo|first=Fernando P.|coauthors=Saraiva, Pedro M.|title=Robust optimization framework for process parameter and tolerance design|journal=AIChE Journal|year=1998|volume=44|issue=9|pages=2007–2017|doi=10.1002/aic.690440908}}</ref> [[medicine]],<ref>{{cite journal|last=Chu|first=Millie|coauthors=Zinchenko, Yuriy; Henderson, Shane G; Sharpe, Michael B|title=Robust optimization for intensity modulated radiation therapy treatment planning under uncertainty|journal=Physics in Medicine and Biology|year=2005|volume=50|issue=23|pages=5463–5477|doi=10.1088/0031-9155/50/23/003}}</ref> and [[computer science]]. In [[engineering]] problems, these formulations often take the name of "Robust Design Optimization" (RDO) or "Reliability Based Design Optimization" (RBDO).
== Example 1 ==

Consider the simple [[linear programming]] problem

:<math> \max_{x,y} \ \{3x + 2y\} \ \ \mathrm{subject\ to}\ \ x,y\ge 0; \ cx + dy \le 10, \ \forall (c,d)\in P </math>

where <math>P</math> is a given subset of <math>\mathbb{R}^{2}</math>.

What makes this a 'robust optimization' problem is the <math>\forall (c,d)\in P</math> clause in the constraints. It requires that, for a pair <math>(x,y)</math> to be admissible, the constraint <math>cx + dy \le 10</math> be satisfied for the '''worst''' <math>(c,d)\in P</math> pertaining to <math>(x,y)</math>, namely for the pair <math>(c,d)\in P</math> that maximizes the value of <math>cx + dy</math> for the given <math>(x,y)</math>.

If the parameter space <math>P</math> is finite (consisting of finitely many elements), then this robust optimization problem is itself a [[linear programming]] problem: for each <math>(c,d)\in P</math> there is a linear constraint <math>cx + dy \le 10</math>.

If <math>P</math> is not a finite set, then this problem is a linear [[semi-infinite programming]] problem, namely a linear programming problem with finitely many (2) decision variables and infinitely many constraints.
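In the finite case, each element of <math>P</math> contributes one ordinary linear constraint, so the robust counterpart can be handed directly to a standard LP solver. The following is a minimal sketch in Python using <code>scipy.optimize.linprog</code>; the particular three-element set <math>P</math> is an illustrative assumption, not part of the example above.

<syntaxhighlight lang="python">
# Minimal sketch of Example 1 for a finite uncertainty set P.
# The set P below is a made-up illustration.
from scipy.optimize import linprog

P = [(1.0, 1.0), (2.0, 0.5), (0.5, 3.0)]    # assumed scenarios for (c, d)

c_obj = [-3.0, -2.0]                         # linprog minimizes, so negate 3x + 2y
A_ub = [[c, d] for (c, d) in P]              # one constraint c*x + d*y <= 10 per scenario
b_ub = [10.0] * len(P)

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x, y = res.x
print(f"robust solution: x = {x:.3f}, y = {y:.3f}, objective = {3*x + 2*y:.3f}")
</syntaxhighlight>

Each scenario adds one row to the constraint matrix, so the size of the robust counterpart grows linearly with the number of elements of <math>P</math>.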
== Classification ==

There are a number of classification criteria for robust optimization problems/models. In particular, one can distinguish between problems dealing with '''local''' and '''global''' models of robustness; and between '''probabilistic''' and '''non-probabilistic''' models of robustness. Modern robust optimization deals primarily with non-probabilistic models of robustness that are [[worst case]] oriented and as such usually deploy [[Wald's maximin model]]s.
=== Local robustness ===

There are cases where robustness is sought against small perturbations in a nominal value of a parameter. A very popular model of local robustness is the [[stability radius|radius of stability]] model:

: <math>\hat{\rho}(x,\hat{u}):= \max_{\rho\ge 0}\ \{\rho: u\in S(x), \forall u\in B(\rho,\hat{u})\}</math>

where <math>\hat{u}</math> denotes the nominal value of the parameter, <math>B(\rho,\hat{u})</math> denotes a ball of radius <math>\rho</math> centered at <math>\hat{u}</math>, and <math>S(x)</math> denotes the set of values of <math>u</math> that satisfy given stability/performance conditions associated with decision <math>x</math>.

In words, the robustness (radius of stability) of decision <math>x</math> is the radius of the largest ball centered at <math>\hat{u}</math> all of whose elements satisfy the stability requirements imposed on <math>x</math>. The situation is illustrated in the following picture:

[[Image:Local robustness.png|500px]]

where the rectangle <math>U(x)</math> represents the set of all the values <math>u</math> associated with decision <math>x</math>.
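For a scalar parameter, the radius of stability can be estimated numerically by enlarging the ball around <math>\hat{u}</math> until the stability condition first fails. The sketch below is only illustrative: the condition <math>S(x)=\{u: ux\le 1\}</math>, the nominal value and the grid resolutions are all assumptions made for the example, not part of the model above.

<syntaxhighlight lang="python">
# Illustrative numerical estimate of the radius of stability for a scalar
# parameter u. The stability condition S(x) = {u : u*x <= 1}, the nominal
# value u_hat and the grids are toy assumptions.
import numpy as np

def stable(x, u):
    """Assumed stability condition: u is acceptable for decision x."""
    return u * x <= 1.0

def radius_of_stability(x, u_hat, rho_max=10.0, n_rho=2001, n_ball=401):
    """Largest rho (on a grid) such that every u in [u_hat-rho, u_hat+rho] is stable."""
    best = 0.0
    for rho in np.linspace(0.0, rho_max, n_rho):
        ball = np.linspace(u_hat - rho, u_hat + rho, n_ball)
        if np.all(stable(x, ball)):
            best = rho        # the whole ball satisfies the condition
        else:
            break             # any larger ball contains the same violating u
    return best

# For x = 0.5 and u_hat = 1.0 the analytic radius is 1/0.5 - 1.0 = 1.0;
# the grid search returns a close approximation.
print(radius_of_stability(x=0.5, u_hat=1.0))
</syntaxhighlight>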
=== Global robustness ===

Consider the simple abstract robust optimization problem

: <math>\max_{x\in X}\ \{f(x): g(x,u)\le b, \forall u\in U\}</math>

where <math>U</math> denotes the set of all ''possible'' values of <math>u</math> under consideration.

This is a ''global'' robust optimization problem in the sense that the robustness constraint <math>g(x,u)\le b, \forall u\in U</math> ranges over all the ''possible'' values of <math>u</math>.

The difficulty is that such a "global" constraint can be too demanding, in that there is no <math>x\in X</math> that satisfies it. But even if such an <math>x\in X</math> exists, the constraint can be too "conservative" in that it yields a solution <math>x\in X</math> that generates a very small payoff <math>f(x)</math>, one that is not representative of the performance of other decisions in <math>X</math>. For instance, there could be an <math>x'\in X</math> that only slightly violates the robustness constraint but yields a very large payoff <math>f(x')</math>. In such cases it might be necessary to relax the robustness constraint a bit and/or modify the statement of the problem.
==== Example 2 ====

Consider the case where the objective is to satisfy a constraint <math>g(x,u)\le b</math>, where <math>x\in X</math> denotes the decision variable and <math>u</math> is a parameter whose set of possible values is <math>U</math>. If there is no <math>x\in X</math> such that <math>g(x,u)\le b,\forall u\in U</math>, then the following intuitive measure of robustness suggests itself:

: <math>\rho(x):= \max_{Y\subseteq U} \ \{size(Y): g(x,u)\le b, \forall u\in Y\} \ , \ x\in X</math>

where <math>size(Y)</math> denotes an appropriate measure of the "size" of set <math>Y</math>. For example, if <math>U</math> is a finite set, then <math>size(Y)</math> could be defined as the [[cardinality]] of set <math>Y</math>.

In words, the robustness of decision <math>x</math> is the size of the largest subset of <math>U</math> for which the constraint <math>g(x,u)\le b</math> is satisfied for each <math>u</math> in this set. An optimal decision is then a decision whose robustness is the largest.

This yields the following robust optimization problem:

: <math>\max_{x\in X, Y\subseteq U} \ \{size(Y): g(x,u) \le b, \forall u\in Y\}</math>

This intuitive notion of global robustness is not used often in practice because the robust optimization problems that it induces are usually (though not always) very difficult to solve.
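For small finite instances, however, the measure can be evaluated by direct enumeration. If <math>size(Y)</math> is the cardinality of <math>Y</math>, the largest feasible subset for a given <math>x</math> is simply the set of scenarios that <math>x</math> satisfies, so <math>\rho(x)</math> reduces to a count. The following sketch uses a made-up constraint function and made-up finite sets purely for illustration.

<syntaxhighlight lang="python">
# Illustrative computation of rho(x) for Example 2 when U is finite and
# size(Y) is the cardinality of Y. The constraint g, the bound b and the
# sets X and U are toy assumptions.

def g(x, u):
    """Assumed constraint function."""
    return x * u

b = 10.0
X = [1.0, 2.0, 3.0]          # assumed finite decision set
U = [1.0, 2.0, 4.0, 6.0]     # assumed finite uncertainty set

def robustness(x):
    """rho(x): number of scenarios u in U with g(x, u) <= b."""
    return sum(1 for u in U if g(x, u) <= b)

# An optimal decision maximizes the robustness measure.
x_star = max(X, key=robustness)
print(x_star, robustness(x_star))   # here x_star = 1.0 satisfies all 4 scenarios
</syntaxhighlight>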
==== Example 3 ====

Consider the robust optimization problem

:<math>z(U):= \max_{x\in X}\ \{f(x): g(x,u)\le b, \forall u\in U\}</math>

where <math>g</math> is a real-valued function on <math>X\times U</math>, and assume that there is no feasible solution to this problem because the robustness constraint <math>g(x,u)\le b, \forall u\in U</math> is too demanding.

To overcome this difficulty, let <math>\mathcal{N}</math> be a relatively small subset of <math>U</math> representing "normal" values of <math>u</math>, and consider the following robust optimization problem:

:<math>z(\mathcal{N}):= \max_{x\in X}\ \{f(x): g(x,u)\le b, \forall u\in \mathcal{N}\}</math>

Since <math>\mathcal{N}</math> is much smaller than <math>U</math>, its optimal solution may not perform well on a large portion of <math>U</math> and therefore may not be robust against the variability of <math>u</math> over <math>U</math>.

One way to fix this difficulty is to relax the constraint <math>g(x,u)\le b</math> for values of <math>u</math> outside the set <math>\mathcal{N}</math> in a controlled manner, so that larger violations are allowed as the distance of <math>u</math> from <math>\mathcal{N}</math> increases. For instance, consider the relaxed robustness constraint

: <math>g(x,u) \le b + \beta \cdot dist(u,\mathcal{N}) \ , \ \forall u\in U</math>

where <math>\beta \ge 0</math> is a control parameter and <math>dist(u,\mathcal{N})</math> denotes the distance of <math>u</math> from <math>\mathcal{N}</math>. Thus, for <math>\beta =0</math> the relaxed robustness constraint reduces back to the original robustness constraint.

This yields the following (relaxed) robust optimization problem:

:<math>z(\mathcal{N},U):= \max_{x\in X}\ \{f(x): g(x,u)\le b + \beta \cdot dist(u,\mathcal{N}) \ , \ \forall u\in U\}</math>

The function <math>dist</math> is defined in such a manner that

:<math>dist(u,\mathcal{N})\ge 0,\forall u\in U</math>

and

: <math>dist(u,\mathcal{N})= 0,\forall u\in \mathcal{N}</math>

and therefore the optimal solution to the relaxed problem satisfies the original constraint <math>g(x,u)\le b</math> for all values of <math>u</math> in <math>\mathcal{N}</math>. In addition, it also satisfies the relaxed constraint

: <math>g(x,u)\le b + \beta \cdot dist(u,\mathcal{N})</math>

outside <math>\mathcal{N}</math>.
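To make the relaxation concrete, suppose <math>u</math> is a scalar, <math>\mathcal{N}</math> is an interval, and <math>dist(u,\mathcal{N})</math> is the usual point-to-interval distance. The sketch below is purely illustrative: the constraint <math>g</math>, the data <math>b</math> and <math>\beta</math>, the sets, and the stand-in objective <math>f(x)=x</math> are all assumptions made for the example.

<syntaxhighlight lang="python">
# Illustrative check of the relaxed robustness constraint of Example 3 for a
# scalar parameter u, with N = [n_lo, n_hi]. All functions and numbers are
# toy assumptions; f(x) = x is used as a stand-in objective.
import numpy as np

b, beta = 1.0, 0.5
n_lo, n_hi = -0.5, 0.5             # "normal" interval N
U = np.linspace(-2.0, 2.0, 401)    # discretized set of all possible values of u

def g(x, u):
    """Assumed constraint function."""
    return x * np.abs(u)

def dist_to_N(u):
    """Distance of u from the interval N; zero inside N."""
    return np.maximum(0.0, np.maximum(n_lo - u, u - n_hi))

def relaxed_feasible(x):
    """True if g(x,u) <= b + beta*dist(u,N) holds for every u on the grid."""
    return bool(np.all(g(x, U) <= b + beta * dist_to_N(U)))

# With f(x) = x, the relaxed problem picks the largest feasible x on a grid.
candidates = [x for x in np.linspace(0.0, 3.0, 301) if relaxed_feasible(x)]
print(max(candidates))
</syntaxhighlight>

Larger values of <math>\beta</math> permit larger violations far from <math>\mathcal{N}</math> and therefore enlarge the feasible set of the relaxed problem.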
=== Non-probabilistic robust optimization models ===

The dominating paradigm in this area of robust optimization is [[Wald's maximin model]], namely

: <math>\max_{x\in X}\min_{u\in U(x)} f(x,u)</math>

where the <math>\max</math> represents the decision maker, the <math>\min</math> represents Nature, namely [[uncertainty]], <math>X</math> represents the decision space and <math>U(x)</math> denotes the set of possible values of <math>u</math> associated with decision <math>x</math>. This is the ''classic'' format of the generic model, and is often referred to as a ''minimax'' or ''maximin'' optimization problem. The non-probabilistic ('''deterministic''') model has been, and continues to be, extensively used for robust optimization, especially in the field of signal processing.<ref>S. Verdu and H. V. Poor (1984), "On Minimax Robustness: A general approach and applications," IEEE Transactions on Information Theory, vol. 30, pp. 328–340, March 1984.</ref><ref>S. A. Kassam and H. V. Poor (1985), "Robust Techniques for Signal Processing: A Survey," Proceedings of the IEEE, vol. 73, pp. 433–481, March 1985.</ref><ref>M. Danish Nisar. [http://www.shaker.eu/shop/978-3-8440-0332-1 "Minimax Robustness in Signal Processing for Communications"], Shaker Verlag, ISBN 978-3-8440-0332-1, August 2011.</ref>
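As a small illustration (an assumed example, not taken from the sources cited above), let <math>f(x,u)=ux-x^2</math>, <math>X=[0,1]</math> and <math>U(x)=[1,2]</math> for every <math>x</math>. Since <math>x\ge 0</math>, the inner minimum is attained at <math>u=1</math>, and the maximin model reduces to an ordinary maximization:

: <math>\max_{x\in [0,1]}\min_{u\in [1,2]} \ (ux - x^2) = \max_{x\in [0,1]} \ (x - x^2) = \frac{1}{4},</math>

attained at <math>x=1/2</math>. The decision is thus chosen against the worst value of <math>u</math>, without placing any probability distribution on <math>u</math>.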
The equivalent [[mathematical programming]] (MP) formulation of the classic format above is

:<math>\max_{x\in X,v\in \mathbb{R}} \ \{v: v\le f(x,u), \forall u\in U(x)\}</math>

Constraints can be incorporated explicitly in these models. The generic constrained classic format is

: <math>\max_{x\in X}\min_{u\in U(x)} \ \{f(x,u): g(x,u)\le b,\forall u\in U(x)\}</math>

The equivalent constrained MP format is

:<math>\max_{x\in X,v\in \mathbb{R}} \ \{v: v\le f(x,u), g(x,u)\le b, \forall u\in U(x)\}</math>
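For finite <math>X</math> and <math>U(x)</math>, the constrained classic format can be evaluated by brute force, with the inner minimum playing the role of the auxiliary variable <math>v</math> in the MP format. The sketch below uses made-up <math>f</math>, <math>g</math>, <math>b</math> and sets, chosen only to illustrate the mechanics.

<syntaxhighlight lang="python">
# Brute-force evaluation of the constrained maximin model for finite X and
# U(x). The payoff f, constraint g, bound b and the sets are toy assumptions.

def f(x, u):
    """Assumed payoff function."""
    return x * u - x ** 2

def g(x, u):
    """Assumed constraint function."""
    return x + u

b = 4.0
X = [0.0, 0.5, 1.0, 1.5, 2.0]          # assumed finite decision set
U = {x: [1.0, 1.5, 2.0] for x in X}    # assumed U(x), here the same for every x

def worst_case_value(x):
    """min over u in U(x) of f(x,u), or None if some constraint is violated."""
    if any(g(x, u) > b for u in U[x]):
        return None
    return min(f(x, u) for u in U[x])

feasible = {}
for x in X:
    v = worst_case_value(x)    # v corresponds to the auxiliary variable in the MP format
    if v is not None:
        feasible[x] = v

x_star = max(feasible, key=feasible.get)
print(x_star, feasible[x_star])        # here: x = 0.5 with worst-case payoff 0.25
</syntaxhighlight>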
=== Probabilistic robust optimization models ===

These models quantify the uncertainty in the "true" value of the parameter of interest by probability distribution functions. They have been traditionally classified as [[stochastic programming]] and [[stochastic optimization]] models.
== See also ==

* [[Stability radius]]
* [[Minimax]]
* [[Minimax estimator]]
* [[Minimax regret]]
* [[Robust statistics]]
* [[Robust decision making]]
* [[Stochastic programming]]
* [[Stochastic optimization]]
* [[Info-gap decision theory]]
* [[Probabilistic-based design optimization]]
* [[Taguchi methods]]
== References ==

{{Reflist}}
== External links ==

* [http://www.robustopt.com ROME: Robust Optimization Made Easy]
* [http://robust.moshe-online.com Robust Decision-Making Under Severe Uncertainty]
== Bibliography ==

* Greenberg, H.J. Mathematical Programming Glossary. World Wide Web, http://glossary.computing.society.informs.org/, 1996–2006. Edited by the INFORMS Computing Society.
* Ben-Tal, A. and Nemirovski, A. (1998). Robust Convex Optimization. ''Mathematics of Operations Research,'' 23, 769–805.
* Ben-Tal, A. and Nemirovski, A. (1999). Robust solutions to uncertain linear programs. ''Operations Research Letters,'' 25, 1–13.
* Ben-Tal, A. and Nemirovski, A. (2002). Robust optimization—methodology and applications. ''Mathematical Programming, Series B,'' 92, 453–480.
* Ben-Tal, A., El Ghaoui, L. and Nemirovski, A. (2006). ''Mathematical Programming, Special issue on Robust Optimization,'' Volume 107(1–2).
* Ben-Tal, A., El Ghaoui, L. and Nemirovski, A. (2009). Robust Optimization. ''Princeton Series in Applied Mathematics,'' Princeton University Press.
* Bertsimas, D. and Sim, M. (2003). Robust Discrete Optimization and Network Flows. ''Mathematical Programming,'' 98, 49–71.
* Bertsimas, D. and Sim, M. (2006). Tractable Approximations to Robust Conic Optimization Problems. ''Mathematical Programming,'' 107(1), 5–36.
* Chen, W. and Sim, M. (2009). Goal Driven Optimization. ''Operations Research,'' 57(2), 342–357.
* Chen, X., Sim, M., Sun, P. and Zhang, J. (2008). A Linear-Decision Based Approximation Approach to Stochastic Programming. ''Operations Research,'' 56(2), 344–357.
* Chen, X., Sim, M. and Sun, P. (2007). A Robust Optimization Perspective on Stochastic Programming. ''Operations Research,'' 55(6), 1058–1071.
* Dembo, R. (1991). Scenario optimization. ''Annals of Operations Research,'' 30(1), 63–80.
* Gupta, S.K. and Rosenhead, J. (1968). Robustness in sequential investment decisions. ''Management Science,'' 15(2), B-18-29.
* Kouvelis, P. and Yu, G. (1997). ''Robust Discrete Optimization and Its Applications,'' Kluwer.
* Mulvey, J.M., Vanderbei, R.J. and Zenios, S.A. (1995). Robust Optimization of Large-Scale Systems. ''Operations Research,'' 43(2), 264–281.
* Mutapcic, A. and Boyd, S. (2009). Cutting-set methods for robust convex optimization with pessimizing oracles. ''Optimization Methods and Software,'' 24(3), 381–406.
* Rosenblat, M.J. (1987). A robust approach to facility design. ''International Journal of Production Research,'' 25(4), 479–486.
* Rosenhead, M.J., Elton, M. and Gupta, S.K. (1972). Robustness and Optimality as Criteria for Strategic Decisions. ''Operational Research Quarterly,'' 23(4), 413–430.
* Rustem, B. and Howe, M. (2002). ''Algorithms for Worst-case Design and Applications to Risk Management,'' Princeton University Press.
* Sniedovich, M. (2007). The art and science of modeling decision-making under severe uncertainty. ''Decision Making in Manufacturing and Services,'' 1(1–2), 111–136.
* Sniedovich, M. (2008). Wald's Maximin Model: a Treasure in Disguise! ''Journal of Risk Finance,'' 9(3), 287–291.
* Sniedovich, M. (2010). A bird's view of info-gap decision theory. ''Journal of Risk Finance,'' 11(3), 268–283.
* Wald, A. (1939). Contributions to the theory of statistical estimation and testing hypotheses. ''The Annals of Mathematical Statistics,'' 10(4), 299–326.
* Wald, A. (1945). Statistical decision functions which minimize the maximum risk. ''The Annals of Mathematics,'' 46(2), 265–280.
* Wald, A. (1950). ''Statistical Decision Functions,'' John Wiley, NY.
[[Category:Mathematical optimization]]