In [[theoretical computer science]], the '''algorithmic Lovász local lemma''' gives an algorithmic way of constructing objects that obey a system of constraints with limited dependence.
 
Given a finite set of ''bad'' events {''A''<sub>1</sub>, ..., ''A<sub>n</sub>''} in a probability space with limited dependence amongst the ''A<sub>i</sub>''s and with specific bounds on their respective probabilities, the [[Lovász local lemma]] proves that with non-zero probability all of these events can be avoided. However, the lemma is non-constructive in that it does not provide any insight on ''how'' to avoid the bad events.
 
If the events  {''A''<sub>1</sub>, ..., ''A<sub>n</sub>''} are determined by a finite collection of mutually independent random variables, a simple [[Las Vegas algorithm]] with [[ZPP (complexity)|expected polynomial runtime]] proposed by [[Robin Moser]] and [[Gábor Tardos]]<ref name="moser:arxiv09">{{cite arxiv|first1=Robin A.|last1=Moser|first2=Gabor|last2=Tardos|eprint=0903.0544|title=A constructive proof of the general Lovász Local Lemma|year=2009}}.</ref> can compute an assignment to the random variables such that all events are avoided.
 
==Review of Lovász local lemma==
{{main|Lovász local lemma}}
 
The Lovász Local Lemma is a powerful tool commonly used in the [[probabilistic method]] to prove the existence of certain complex mathematical objects with a set of prescribed features. A typical proof proceeds by operating on the complex object in a random manner and uses the Lovász Local Lemma to bound the probability that any of the features is missing. The absence of a feature is considered a ''bad event'' and if it can be shown that all such bad events can be avoided simultaneously with non-zero probability, the existence follows. The lemma itself reads as follows:
 
<blockquote>Let <math>\mathcal{A} = \{ A_1, \ldots, A_n \}</math> be a finite set of events in the probability space Ω. For <math> A \in \mathcal{A} </math> let <math> \Gamma(A)</math> denote a subset of <math>\mathcal{A}</math> such that ''A'' is independent from the collection of events <math>\mathcal{A} \setminus (\{A \} \cup \Gamma(A))</math>. If there exists an assignment of reals <math> x : \mathcal{A} \rightarrow (0,1) </math> to the events such that
 
:<math> \forall A \in \mathcal{A} : \Pr[A] \leq x(A) \prod\nolimits_{B \in \Gamma(A)} (1-x(B)) </math>
 
then the probability of avoiding all events in <math> \mathcal{A} </math> is positive, in particular
 
:<math> \Pr\left[\,\overline{A_1} \wedge \cdots \wedge \overline{A_n}\,\right] \geq \prod\nolimits_{A \in \mathcal{A}} (1-x(A)). </math></blockquote>
 
==Algorithmic version of the Lovász local lemma==
The Lovász Local Lemma is non-constructive because it only allows us to conclude the existence of structural properties or complex objects but does not indicate how these can be found or constructed efficiently in practice. Note that random sampling from the probability space Ω is likely to be inefficient, since the probability of the event of interest
 
:<math> \Pr \left[ \overline{A_1} \wedge \ldots \wedge \overline{A_n} \right]</math>
 
is only bounded by a product of small numbers
 
:<math> \prod\nolimits_{A \in \mathcal{A}} (1-x(A)) </math>
 
and therefore likely to be very small.
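For example, if <math>x(A) = \tfrac{1}{2}</math> for each of ''n'' events, the guaranteed lower bound is only <math>2^{-n}</math>, so in the worst case an expected <math>2^n</math> independent samples would be drawn before a single assignment avoiding all bad events is found.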
 
Under the assumption that all of the events in <math> \mathcal{A} </math> are determined by a finite collection of mutually independent [[Random Variable|random variables]] <math> \mathcal{P} </math> in Ω, [[Robin Moser]] and [[Gábor Tardos]] proposed an efficient randomized algorithm that computes an assignment to the random variables in <math> \mathcal{P} </math> such that all events in <math> \mathcal{A} </math> are avoided.
 
Hence, this algorithm can be used to efficiently construct witnesses of complex objects with prescribed features for most problems to which the Lovász Local Lemma applies.
 
===History===
Prior to the work of Moser and Tardos, earlier work had already made progress in developing algorithmic versions of the Lovász Local Lemma. [[József Beck]] in 1991 first gave a proof that an algorithmic version was possible.<ref name=beck:rsa91>{{citation|doi=10.1002/rsa.3240020402|first=József|last=Beck|authorlink=József Beck|title=An algorithmic approach to the Lovász Local Lemma. I|journal=Random Structures and Algorithms|volume=2|issue=4|pages=343–366|year=1991}}.</ref> In this breakthrough result, a stricter requirement was imposed upon the problem formulation than in the original non-constructive definition. Beck's approach required that for each <math>A \in \mathcal{A}</math>, the number of dependencies of ''A'' be bounded above by <math>|\Gamma(A)| < 2^{\frac{k}{48}}</math> (approximately). The existential version of the Local Lemma permits a larger upper bound on dependencies:
 
:<math>|\Gamma(A)| < \frac{2^{k}}{e}.</math>
 
This bound is known to be tight. Since the initial algorithm, work has been done to push algorithmic versions of the Local Lemma closer to this tight value. The work of Moser and Tardos is the most recent in this chain and provides an algorithm that achieves this tight bound.
 
===Algorithm===
Let us first introduce some concepts that are used in the algorithm.
 
For any random variable <math> P \in \mathcal{P}</math>, <math>v_P</math> denotes the current assignment (evaluation) of ''P''. An assignment (evaluation) to all random variables is denoted <math> (v_P)_{\mathcal{P}}</math>.
 
The unique minimal subset of random variables in <math> \mathcal{P} </math> that determine the event ''A'' is denoted by vbl(''A'').
 
If the event ''A'' is true under an evaluation <math> (v_P)_{\mathcal{P}}</math>, we say that <math> (v_P)_{\mathcal{P}}</math> '''satisfies''' ''A'', otherwise it '''avoids''' ''A''.
 
Given a set of bad events <math> \mathcal{A} </math> we wish to avoid that is determined by a collection of mutually independent random variables <math> \mathcal{P} </math>, the algorithm proceeds as follows:
 
# <math> \forall P \in \mathcal{P} </math>: <math> v_P \leftarrow </math> a random evaluation of ''P''
# '''while''' <math> \exists A \in \mathcal{A}</math> such that A is satisfied by <math> (v_P)_{\mathcal{P}}</math>
#* pick an arbitrary satisfied event <math> A \in \mathcal{A}</math>
#* <math> \forall P \in \text{vbl}(A) </math>: <math> v_P \leftarrow </math> a new random evaluation of ''P''
# '''return''' <math> (v_P)_{\mathcal{P}}</math>
 
In the first step, the algorithm randomly initializes the current assignment ''v<sub>P</sub>'' for each random variable <math> P \in \mathcal{P}</math>. This means that an assignment ''v<sub>P</sub>'' is sampled randomly and independently according to the distribution of the random variable ''P''.
 
The algorithm then enters the main loop, which is executed until all events in <math> \mathcal{A} </math> are avoided, at which point the algorithm returns the current assignment. At each iteration of the main loop, the algorithm picks an arbitrary satisfied event ''A'' (either randomly or deterministically) and resamples all the random variables that determine ''A''.
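 
The main loop can be written down directly. The following Python sketch is an illustration rather than the algorithm as stated by Moser and Tardos; the arguments <code>vbl</code>, <code>satisfies</code> and <code>sample</code> are placeholders for the problem-specific ingredients described above.

<syntaxhighlight lang="python">
import random

def resampling_algorithm(variables, events, vbl, satisfies, sample):
    """Illustrative sketch of the sequential resampling algorithm.

    variables -- the mutually independent random variables P
    events    -- the bad events A to be avoided
    vbl       -- vbl(A): the variables that determine event A
    satisfies -- satisfies(A, assignment): True if A occurs under the assignment
    sample    -- sample(P): a fresh independent random evaluation of P
    """
    # Step 1: initialise every variable with an independent random evaluation.
    assignment = {P: sample(P) for P in variables}

    # Step 2: while some bad event is satisfied, pick one arbitrarily and
    # resample all of the variables it depends on.
    while True:
        satisfied = [A for A in events if satisfies(A, assignment)]
        if not satisfied:
            return assignment          # all bad events are avoided
        A = random.choice(satisfied)   # any satisfied event may be picked
        for P in vbl(A):
            assignment[P] = sample(P)  # fresh independent resample
</syntaxhighlight>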
 
===Main theorem===
Let <math> \mathcal{P} </math> be a finite set of mutually independent random variables in the probability space Ω. Let <math> \mathcal{A} </math> be a finite set of events determined by these variables. If there exists an assignment of reals <math> x : \mathcal{A} \to (0,1) </math> to the events such that
 
:<math> \forall A \in \mathcal{A} : \Pr[A] \leq x(A) \prod\nolimits_{B \in \Gamma(A)} (1-x(B)) </math>
 
then there exists an assignment of values to the variables <math>\mathcal{P}</math> avoiding all of the events in <math> \mathcal{A} </math>.
 
Moreover, the randomized algorithm described above resamples an event <math> A \in \mathcal{A} </math> at most an expected
 
:<math> \frac{x(A)}{1-x(A)}</math>
 
times before it finds such an evaluation. Thus the expected total number of resampling steps and therefore the expected runtime of the algorithm is at most
 
:<math> \sum_{A \in \mathcal{A}} \frac{x(A)}{1-x(A)}.</math>
 
The proof of this theorem can be found in the paper by Moser and Tardos.<ref name="moser:arxiv09"/>
 
===Symmetric version===
The requirement of an assignment function ''x'' satisfying a set of inequalities in the theorem above is complex and not intuitive. But this requirement can be replaced by three simple conditions:
* <math> \forall A \in \mathcal{A}: |\Gamma(A)| \leq D </math>, i.e. each event ''A'' depends on at most ''D'' other events,
* <math> \forall A \in \mathcal{A}: \Pr[A] \leq  p </math>, i.e. the probability of each event ''A'' is at most ''p'',
* <math> e p (D+1) \leq 1 </math>, where ''e'' is the [[e (mathematical constant)|base of the natural logarithm]].
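 
These three conditions imply the hypothesis of the general lemma above: choosing the constant assignment <math> x(A) = \frac{1}{D+1} </math> for every event and using <math> \left(1-\frac{1}{D+1}\right)^{D} \geq \frac{1}{e} </math> gives

:<math> x(A) \prod\nolimits_{B \in \Gamma(A)} (1-x(B)) \geq \frac{1}{D+1}\left(1-\frac{1}{D+1}\right)^{D} \geq \frac{1}{e(D+1)} \geq p \geq \Pr[A]. </math>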
 
The version of the Lovász Local Lemma with these three conditions instead of the assignment function ''x'' is called the ''Symmetric Lovász Local Lemma''.  We can also state the ''Symmetric Algorithmic Lovász Local Lemma'':
 
Let <math>\mathcal{P}</math> be a finite set of mutually independent random variables and <math>\mathcal{A}</math> be a finite set of events determined by these variables as before. If the above three conditions hold then there exists an assignment of values to the variables <math>\mathcal{P}</math> avoiding all of the events in <math>\mathcal{A} </math>.
 
Moreover, the randomized algorithm described above resamples an event <math> A \in \mathcal{A} </math> at most an expected <math>\frac{1}{D}</math> times before it finds such an evaluation. Thus the expected total number of resampling steps, and therefore the expected runtime of the algorithm, is at most <math>\frac{n}{D}</math>, where ''n'' is the number of events in <math>\mathcal{A}</math>.
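 
As an illustration of how these bounds are used, the following short Python helper (a hypothetical function, not part of the lemma) checks the condition <math> e p (D+1) \leq 1 </math> and, when it holds, returns the bound <math>\frac{n}{D}</math> on the expected number of resampling steps.

<syntaxhighlight lang="python">
import math

def symmetric_lll_bound(p, D, n):
    """Return the expected resampling bound n/D if e*p*(D+1) <= 1, else None."""
    if math.e * p * (D + 1) <= 1:
        return n / D
    return None

# Example: n = 1000 events, each of probability 2**-10 and
# each depending on at most D = 300 other events.
print(symmetric_lll_bound(p=2**-10, D=300, n=1000))  # prints 3.333...
</syntaxhighlight>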
 
==Example==
The following example illustrates how the algorithmic version of the Lovász Local Lemma can be applied to a simple problem.
 
Let Φ be a [[Conjunctive normal form|CNF]] formula over variables ''X''<sub>1</sub>, ..., ''X<sub>n</sub>'', containing ''n'' clauses, each with at least ''k'' [[Literal (mathematical logic)|literals]], and with each variable ''X<sub>i</sub>'' appearing in at most <math>\frac{2^k}{ke} </math> clauses. Then, Φ is satisfiable.
 
This statement can be proven easily using the symmetric version of the Algorithmic Lovász Local Lemma. Let ''X''<sub>1</sub>, ..., ''X<sub>n</sub>'' be the collection of mutually independent random variables <math> \mathcal{P} </math>, each of which is sampled [[Uniform distribution (discrete)|uniformly at random]] from the two truth values.
 
Firstly, we truncate each clause in Φ to contain exactly ''k'' literals. Since each clause is a disjunction, this does not harm satisfiability, for if we can find a satisfying assignment for the truncated formula, it can easily be extended to a satisfying assignment for the original formula by reinserting the truncated literals.
 
Now, define a bad event ''A<sub>j</sub>'' for each clause in Φ, where ''A<sub>j</sub>'' is the event that clause ''j'' in Φ is unsatisfied by the current assignment. Since each clause contains ''k'' literals (and therefore ''k'' variables) and since all variables are sampled uniformly at random, we can bound the probability of each bad event by
 
:<math>\Pr[A_j] = p = 2^{-k}.</math>
 
Since each variable can appear in at most <math> \frac{2^k}{ke}</math> clauses and there are ''k'' variables in each clause, each bad event ''A<sub>j</sub>'' can depend on at most
 
:<math> D = k\left(\frac{2^k}{ke}-1\right) \leq \frac{2^k}{e} -1 </math>
 
other events. Therefore:
 
:<math>D+1 \leq \frac{2^k}{e}.</math>
 
Multiplying both sides by ''ep'' we get
 
:<math> ep(D+1) \leq e 2^{-k} \frac{2^k}{e} = 1. </math>
 
It follows from the symmetric Lovász Local Lemma that the probability of a random assignment to ''X''<sub>1</sub>, ..., ''X<sub>n</sub>'' satisfying all clauses in Φ is non-zero, and hence such an assignment must exist.
 
Now, the Algorithmic Lovász Local Lemma actually allows us to efficiently compute such an assignment by applying the algorithm described above. The algorithm proceeds as follows:
 
It starts with a random [[truth value]] assignment to the variables ''X''<sub>1</sub>, ..., ''X<sub>n</sub>'' sampled uniformly at random. While there exists a clause in Φ that is unsatisfied, it randomly picks an unsatisfied clause ''C'' in Φ and assigns a new truth value to all variables that appear in ''C'' chosen uniformly at random. Once all clauses in Φ are satisfied, the algorithm returns the current assignment.
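 
A compact Python sketch of this procedure is given below; it is illustrative only, and the encoding of clauses as lists of signed integers (<code>i</code> for ''X<sub>i</sub>'' and <code>-i</code> for its negation) is an assumption made for the example.

<syntaxhighlight lang="python">
import random

def resample_cnf(num_vars, clauses):
    """Search for a satisfying assignment by resampling unsatisfied clauses.

    clauses -- list of clauses, each a list of nonzero integers:
               literal i stands for X_i, literal -i for (not X_i).
    Returns a dict mapping variable index -> truth value.
    """
    # Uniform random initial truth assignment.
    assignment = {i: random.random() < 0.5 for i in range(1, num_vars + 1)}

    def unsatisfied(clause):
        # A clause is unsatisfied iff every one of its literals is false.
        return not any(assignment[abs(l)] == (l > 0) for l in clause)

    while True:
        bad = [c for c in clauses if unsatisfied(c)]
        if not bad:
            return assignment         # all clauses are satisfied
        clause = random.choice(bad)   # pick an unsatisfied clause
        for l in clause:              # resample all of its variables
            assignment[abs(l)] = random.random() < 0.5

# Example: (X1 or X2) and (not X1 or X3)
print(resample_cnf(3, [[1, 2], [-1, 3]]))
</syntaxhighlight>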
 
This algorithm is essentially a variant of [[WalkSAT]], which is used to solve general [[boolean satisfiability problem]]s; the difference is that WalkSAT resamples only a single variable of the chosen unsatisfied clause, whereas the algorithm above resamples all of them. Hence, the Algorithmic Lovász Local Lemma proves that this algorithm has an expected runtime of at most
 
:<math> \frac{n}{\frac{2^k}{e}-k} </math>
 
steps on CNF formulas that satisfy the two conditions above.
 
A stronger version of the above statement is proven by Moser,<ref>{{cite arxiv|title=A constructive proof of the Lovász Local Lemma|first=Robin A.|last=Moser|year=2008|eprint=0810.4812}}.</ref> see also Berman, Karpinski and Scott.<ref>Piotr Berman, Marek Karpinski and Alexander D. Scott, Approximation Hardness and Satisfiability of Bounded Occurrence Instances of SAT, [http://eccc.hpi-web.de/report/2003/022/ ECCC TR 03-022 (2003)].</ref>
 
==Applications==
As mentioned before, the algorithmic version of the Lovász Local Lemma applies to most problems for which the general Lovász Local Lemma is used as a proof technique. Some of these problems are discussed in the following articles:
 
* [[Probabilistic proofs of non-probabilistic theorems]]
* [[Random graph]]
 
==Parallel version==
The algorithm described above lends itself well to parallelization, since resampling two independent events <math> A,B \in \mathcal{A}</math>, i.e. events with <math> \text{vbl}(A) \cap \text{vbl}(B) = \emptyset </math>, in parallel is equivalent to resampling ''A'', ''B'' sequentially. Hence, at each iteration of the main loop one can determine a maximal set ''S'' of independent satisfied events and resample all events in ''S'' in parallel.
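 
A sketch of a single round of this parallel scheme, written sequentially in Python with the same placeholder names as in the sketch above, could look as follows; the greedy construction of a maximal independent set is one possible choice and is not prescribed by the lemma.

<syntaxhighlight lang="python">
def parallel_round(events, vbl, satisfies, sample, assignment):
    """Simulate one round of the parallel resampling algorithm.

    Greedily collects a maximal set S of satisfied events whose variable
    sets are pairwise disjoint, then resamples all of their variables
    (these resamplings do not interact and could run in parallel).
    Returns True if any event was still satisfied at the start of the round.
    """
    satisfied = [A for A in events if satisfies(A, assignment)]
    S, used = [], set()
    for A in satisfied:
        if used.isdisjoint(vbl(A)):    # shares no variable with events in S
            S.append(A)
            used.update(vbl(A))
    for A in S:
        for P in vbl(A):
            assignment[P] = sample(P)  # fresh independent resample
    return bool(satisfied)
</syntaxhighlight>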
 
Under the assumption that the assignment function ''x'' satisfies the slightly stronger conditions:
 
:<math> \forall A \in \mathcal{A} : \Pr[A] \leq (1 - \varepsilon) x(A) \prod\nolimits_{B \in \Gamma(A)} (1-x(B)) </math>
 
for some ε > 0, Moser and Tardos proved that the parallel algorithm achieves a better runtime complexity. In this case, the parallel version of the algorithm takes an expected
 
:<math> O\left(\frac{1}{\varepsilon} \log \sum_{A \in \mathcal{A}} \frac{x(A)}{1-x(A)}\right) </math>
 
steps before it terminates.  The parallel version of the algorithm can be seen as a special case of the sequential algorithm shown above, and so this result also holds for the sequential case.
 
==References==
{{reflist}}
 
{{DEFAULTSORT:Algorithmic Lovasz local lemma}}
[[Category:Probability theorems]]
[[Category:Combinatorics]]
[[Category:Lemmas]]
