'''Learning with errors (LWE)''' is a problem in [[machine learning]] that is conjectured to be hard to solve. It is a generalization of the [[parity learning]] problem, introduced<ref name="regev05" /> by [[Oded Regev]] in 2005. Regev showed, furthermore, that the LWE problem is as hard to solve as several worst-case [[lattice problems]]. The LWE problem has recently<ref name="regev05">Oded Regev, “On lattices, learning with errors, random linear codes, and cryptography,” in Proceedings of the thirty-seventh annual ACM symposium on Theory of computing (Baltimore, MD, USA: ACM, 2005), 84-93, http://portal.acm.org/citation.cfm?id=1060590.1060603.</ref><ref name="peikert09">Chris Peikert, “Public-key cryptosystems from the worst-case shortest vector problem: extended abstract,” in Proceedings of the 41st annual ACM symposium on Theory of computing (Bethesda, MD, USA: ACM, 2009), 333-342, http://portal.acm.org/citation.cfm?id=1536414.1536461.</ref> been used as a [[Computational hardness assumption|hardness assumption]] to create [[Public-key cryptography|public-key cryptosystems]].


An algorithm is said to solve the LWE problem if, when given access to samples <math>(x,y)</math> where <math>x\in \mathbb{Z}_q^n</math> and <math>y \in \mathbb{Z}_q</math>, with the assurance that, for some fixed [[linear function]] <math>f:\mathbb{Z}_q^n \rightarrow \mathbb{Z}_q</math>, the value <math>y</math> equals <math>f(x)</math> with high probability and otherwise deviates from it according to some known noise model, the algorithm can recreate <math>f</math>, or some close approximation of it, with high probability.


== Definition ==
Denote by <math>\mathbb{T}=\mathbb{R}/\mathbb{Z}</math> the additive group on reals modulo one. Denote by <math>A_{\mathbf{s},\phi}</math> the distribution on <math>\mathbb{Z}_q^n \times \mathbb{T}</math> obtained by choosing a vector <math>\mathbf{a}\in \mathbb{Z}_q^n</math> uniformly at random, choosing <math>e</math> according to a probability distribution <math>\phi</math>  on <math>\mathbb{T}</math> and outputting <math>(\mathbf{a},\langle \mathbf{a},\mathbf{s} \rangle /q + e)</math> for some fixed vector <math>\mathbf{s} \in \mathbb{Z}_q^n</math> where the division is done in the [[field of reals]], and the addition in <math>\mathbb{T}</math>.
 
The learning with errors problem '''<math>LWE_{q,\phi}</math>''' is to find <math>\mathbf{s} \in \mathbb{Z}_q^n</math>, given access to polynomially many samples from <math>A_{\mathbf{s},\phi}</math>.
 
For every <math>\alpha > 0</math>, denote by <math>D_\alpha</math> the one-dimensional [[Normal distribution|Gaussian]] with density function <math>D_\alpha(x)=\rho_\alpha(x)/\alpha</math> where <math>\rho_\alpha(x)=e^{-\pi(|x|/\alpha)^2}</math>, and let <math>\Psi_\alpha</math> be the distribution on <math>\mathbb{T}</math> obtained by reducing <math>D_\alpha</math> modulo one. The version of LWE considered in most of the results below is <math>LWE_{q,\Psi_\alpha}</math>.
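To make the definition concrete, here is a minimal sketch of how samples from <math>A_{\mathbf{s},\Psi_\alpha}</math> could be generated. The function and parameter names are illustrative choices, not part of the original paper; note that the density <math>\rho_\alpha(x)/\alpha</math> corresponds to a normal distribution with standard deviation <math>\alpha/\sqrt{2\pi}</math>.

<syntaxhighlight lang="python">
import numpy as np

def lwe_sample(s, q, alpha, rng):
    """Draw one sample (a, b) from A_{s, Psi_alpha}.

    a is uniform over Z_q^n, and b = <a, s>/q + e (mod 1), where e is
    Gaussian noise of width alpha reduced modulo one.
    """
    n = len(s)
    a = rng.integers(0, q, size=n)                    # uniform a in Z_q^n
    e = rng.normal(0.0, alpha / np.sqrt(2 * np.pi))   # density rho_alpha(x)/alpha
    b = (int(a @ s) / q + e) % 1.0                    # element of T = R/Z
    return a, b

rng = np.random.default_rng(0)
n, q, alpha = 8, 97, 0.01            # toy parameters for illustration
s = rng.integers(0, q, size=n)       # the secret vector
samples = [lwe_sample(s, q, alpha, rng) for _ in range(20)]
</syntaxhighlight>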
 
== Decision version ==
 
The '''LWE''' problem described above is the ''search'' version of the problem. In the ''decision'' version ('''DLWE'''), the goal is to distinguish between noisy inner products and uniformly random samples from <math>\mathbb{Z}_q^n \times \mathbb{T}</math> (in practice, some discretized version of it). Regev<ref name="regev05" /> showed that the ''decision'' and ''search'' versions are equivalent when <math>q</math> is a prime bounded by some polynomial in <math>n</math>.
 
=== Solving decision assuming search ===
Intuitively, if we have a procedure for the search problem, the decision version can be solved easily: just feed the input samples for the decision problem to the solver for the search problem. Denote the given samples by <math>\{(\mathbf{a_i},\mathbf{b_i})\} \subset \mathbb{Z}^n_q \times \mathbb{T}</math>. If the solver returns a candidate <math>\mathbf{s}</math>, calculate <math>\{ \mathbf{b_i} - \langle \mathbf{a_i}, \mathbf{s} \rangle /q \}</math> for all <math>i</math>. If the samples are from an LWE distribution, then the results of this calculation will be distributed according to <math>\chi</math>; but if the samples are uniformly random, these quantities will be distributed uniformly as well.
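As an illustration, the statistical check at the heart of this reduction could look as follows. The helper <code>search_solver</code> and the particular test on the residuals are assumptions made for this sketch, not constructions from the source.

<syntaxhighlight lang="python">
import numpy as np

def decide_from_search(samples, q, search_solver):
    """Decide whether samples follow an LWE distribution or are uniform.

    samples: list of (a, b) with a in Z_q^n and b in T = [0, 1).
    search_solver: assumed oracle returning a candidate secret s.
    """
    s = search_solver(samples)
    # Residuals b_i - <a_i, s>/q (mod 1), folded into [-1/2, 1/2).
    res = np.array([((b - int(a @ s) / q) + 0.5) % 1.0 - 0.5
                    for a, b in samples])
    # Under an LWE distribution the residuals concentrate near 0;
    # under the uniform distribution their mean absolute value is ~1/4.
    return "LWE" if np.mean(np.abs(res)) < 0.125 else "uniform"
</syntaxhighlight>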
 
=== Solving search assuming decision ===
For the other direction, given a solver for the decision problem, the search version can be solved as follows: Recover <math>\mathbf{s}</math> one coordinate at a time. To obtain the first coordinate, <math>\mathbf{s}_1</math>, make a guess <math>k \in \mathbb{Z}_q</math>, and do the following. Choose a number <math>r \in \mathbb{Z}_q</math> uniformly at random. Transform the given samples <math>\{(\mathbf{a_i},\mathbf{b_i})\} \subset \mathbb{Z}^n_q \times \mathbb{T}</math> by calculating <math>\{(\mathbf{a_i}+(r,0,\ldots,0),\mathbf{b_i}+(r k)/q)\}</math>, and send the transformed samples to the decision solver.
 
If the guess <math>k</math> was correct, the transformation takes the distribution <math>A_{\mathbf{s},\chi}</math> to itself, and otherwise, since <math>q</math> is prime, it takes it to the uniform distribution. So, given a polynomial-time solver for the decision problem that errs with very small probability, since <math>q</math> is bounded by some polynomial in <math>n</math>, it only takes polynomial time to guess every possible value for <math>k</math> and use the solver to see which one is correct. 
 
After obtaining <math>\mathbf{s}_1</math>, we follow an analogous procedure for each other coordinate <math>\mathbf{s}_j</math>.  Namely, we transform our <math>\mathbf{b_i}</math> samples the same way, and transform our <math>\mathbf{a_i}</math> samples by calculating <math>\mathbf{a_i} + (0, \ldots, r, \ldots, 0)</math>, where the <math>r</math> is in the <math>j^{th}</math> coordinate.  <ref name="regev05" />
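A sketch of this guess-and-check loop for a single coordinate is given below; <code>decision_solver</code> is an assumed oracle, and the interfaces are illustrative rather than taken from the paper.

<syntaxhighlight lang="python">
import numpy as np

def transform(samples, q, j, k, r):
    """Shift coordinate j of each a_i by r and compensate b_i with guess k.

    If k equals the j-th coordinate of the secret, the output is again
    LWE-distributed; otherwise (q prime, r random) it becomes uniform.
    """
    out = []
    for a, b in samples:
        a2 = a.copy()
        a2[j] = (a2[j] + r) % q
        out.append((a2, (b + r * k / q) % 1.0))
    return out

def recover_coordinate(samples, q, j, decision_solver, rng):
    """Try every guess k in Z_q; the correct one preserves the LWE structure."""
    for k in range(q):
        r = int(rng.integers(1, q))     # random nonzero shift
        if decision_solver(transform(samples, q, j, k, r)) == "LWE":
            return k
    return None
</syntaxhighlight>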
 
Peikert<ref name="peikert09" /> showed that this reduction, with a small modification, works for any <math>q</math> that is a product of distinct, small (polynomial in <math>n</math>) primes. The main idea is that if <math>q = q_1 q_2 \cdots q_t</math>, then for each <math>q_{\ell}</math> one can guess and check whether <math>\mathbf{s}_j</math> is congruent to <math>0 \mod q_{\ell}</math>, and then use the [[Chinese remainder theorem]] to recover <math>\mathbf{s}_j</math>.
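Assuming the residues of <math>\mathbf{s}_j</math> modulo each <math>q_\ell</math> have been recovered, the final combination step is ordinary Chinese remaindering; the routine below is a generic sketch, not code from either paper.

<syntaxhighlight lang="python">
def crt(residues, moduli):
    """Combine x = residues[l] (mod moduli[l]) into x mod prod(moduli),
    assuming the moduli are pairwise coprime."""
    x, m = 0, 1
    for r, q_l in zip(residues, moduli):
        # Solve x + m*t = r (mod q_l) using the inverse of m modulo q_l.
        t = ((r - x) * pow(m, -1, q_l)) % q_l
        x += m * t
        m *= q_l
    return x % m

# Example: recovering s_j = 23 from its residues modulo 3, 5 and 7.
assert crt([23 % 3, 23 % 5, 23 % 7], [3, 5, 7]) == 23
</syntaxhighlight>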
 
=== Average case hardness ===
Regev<ref name="regev05" /> showed the [[random self-reducibility]] of the '''LWE''' and '''DLWE''' problems for arbitrary <math>q</math> and <math>\chi</math>. Given samples <math>\{(\mathbf{a_i},\mathbf{b_i})\}</math> from <math>A_{\mathbf{s},\chi}</math>, it is easy to see that <math>\{(\mathbf{a_i},\mathbf{b_i} + \langle \mathbf{a_i}, \mathbf{t} \rangle /q)\}</math> are samples from <math>A_{\mathbf{s} + \mathbf{t},\chi}</math>.
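This re-randomization step is essentially a one-liner; the sketch below is illustrative only.

<syntaxhighlight lang="python">
import numpy as np

def rerandomize(samples, q, t):
    """Map samples from A_{s, chi} to samples from A_{s + t, chi} by
    replacing each b_i with b_i + <a_i, t>/q (mod 1)."""
    return [(a, (b + int(a @ t) / q) % 1.0) for a, b in samples]
</syntaxhighlight>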
 
So, suppose there was some set <math>\mathcal{S} \subset \mathbb{Z}_q^n</math> such that <math>|\mathcal{S}|/|\mathbb{Z}_q^n| = 1/poly(n)</math>, and for distributions <math>A_{\mathbf{s'},\chi}</math>, with <math>\mathbf{s'} \leftarrow \mathcal{S}</math>, '''DLWE''' was easy.
 
Then there would be some distinguisher <math>\mathcal{A}</math>, who, given samples <math>\{(\mathbf{a_i},\mathbf{b_i}) \}</math>, could tell whether they were uniformly random or from <math>A_{\mathbf{s'},\chi}</math>.  If we need to distinguish uniformly random samples from <math>A_{\mathbf{s},\chi}</math>, where <math>\mathbf{s}</math> is chosen uniformly at random from <math>\mathbb{Z}_q^n</math>, we could simply try different values <math>\mathbf{t}</math> sampled uniformly at random from <math>\mathbb{Z}_q^n</math>, calculate <math>\{(\mathbf{a_i},\mathbf{b_i} + \langle \mathbf{a_i}, \mathbf{t} \rangle /q)\}</math> and feed these samples to <math>\mathcal{A}</math>.  Since <math>\mathcal{S}</math> comprises a large fraction of <math>\mathbb{Z}_q^n</math>, with high probability, if we choose a polynomial number of values for <math>\mathbf{t}</math>, we will find one such that <math>\mathbf{s} + \mathbf{t} \in \mathcal{S}</math>, and <math>\mathcal{A}</math> will successfully distinguish the samples.
 
Thus, no such <math>\mathcal{S}</math> can exist, meaning '''LWE''' and '''DLWE''' are (up to a polynomial factor) as hard in the average case as they are in the worst case.
 
== Hardness results ==
=== Regev's result ===
For an <math>n</math>-dimensional lattice <math>L</math>, let the ''smoothing parameter'' <math>\eta_\epsilon(L)</math> denote the smallest <math>s</math> such that <math>\rho_{1/s}(L^*\setminus \{\mathbf{0}\}) \leq \epsilon </math>, where <math>L^*</math> is the dual of <math>L</math> and <math>\rho_\alpha(x)=e^{-\pi(|x|/\alpha)^2}</math> is extended to sets by summing the function values over all elements of the set. Let <math>D_{L,r}</math> denote the discrete Gaussian distribution on <math>L</math> of width <math>r</math> for a lattice <math>L</math> and real <math>r>0</math>, in which the probability of each <math>x \in L</math> is proportional to <math>\rho_r(x)</math>.
 
The ''discrete Gaussian sampling problem'' (DGS) is defined as follows: An instance of <math>DGS_\phi</math> is given by an <math>n</math>-dimensional lattice <math>L</math> and a number <math>r \geq \phi(L)</math>. The goal is to output a sample from <math>D_{L,r}</math>. Regev shows that there is a reduction from <math>GapSVP_{100\sqrt{n}\gamma(n)}</math> to <math>DGS_{\sqrt{n}\gamma(n)/\lambda(L^*)}</math> for any function <math>\gamma(n)</math>.
 
Regev then shows that there exists an efficient quantum algorithm for <math>DGS_{\sqrt{2n}\eta_\epsilon(L)/\alpha}</math> given access to an oracle for <math>LWE_{q,\Psi_\alpha}</math> for integer <math>q</math> and <math>\alpha \in (0,1)</math> such that <math>\alpha q > 2\sqrt{n}</math>. This implies the hardness of '''LWE'''. Although the proof of this assertion works for any <math>q</math>, for creating a cryptosystem the modulus <math>q</math> has to be polynomial in <math>n</math>.
 
=== Peikert's result ===
 
Peikert proves<ref name="peikert09" /> that there is a probabilistic polynomial time reduction from the [[Lattice_problems#GapSVP|<math>GapSVP_{\zeta,\gamma}</math>]] problem in the worst case to solving <math>LWE_{q,\Psi_\alpha}</math> using <math>poly(n)</math> samples for parameters <math>\alpha \in (0,1)</math>, <math>\gamma(n)\geq n/(\alpha \sqrt{\log{n}})</math>, <math>\zeta(n) \geq \gamma(n)</math> and <math>q \geq (\zeta/\sqrt{n}) \cdot \omega(\sqrt{\log{n}})</math>.
 
== Use in Cryptography ==
 
The '''LWE''' problem serves as a versatile hardness assumption in the construction of several<ref name="regev05" /><ref name="peikert09" /><ref>Chris Peikert and Brent Waters, “Lossy trapdoor functions and their applications,” in Proceedings of the 40th annual ACM symposium on Theory of computing (Victoria, British Columbia, Canada: ACM, 2008), 187-196, http://portal.acm.org/citation.cfm?id=1374406.</ref><ref>Craig Gentry, Chris Peikert, and Vinod Vaikuntanathan, “Trapdoors for hard lattices and new cryptographic constructions,” in Proceedings of the 40th annual ACM symposium on Theory of computing (Victoria, British Columbia, Canada: ACM, 2008), 197-206, http://portal.acm.org/citation.cfm?id=1374407.</ref> cryptosystems. In 2005, Regev<ref name="regev05" /> showed that the decision version of LWE is hard assuming quantum hardness of the [[lattice problems]] <math>GapSVP_\gamma</math> (for <math>\gamma</math> as above) and <math>SIVP_t</math> with <math>t=\tilde{O}(n/\alpha)</math>. In 2009, Peikert<ref name="peikert09" /> proved a similar result assuming only the classical hardness of the related problem [[Lattice_problems#GapSVP|<math>GapSVP_{\zeta,\gamma}</math>]]. The disadvantage of Peikert's result is that it relies on a non-standard variant of the GapSVP problem, which is easier than SIVP.
 
=== Public-key cryptosystem ===
Regev<ref name="regev05" /> proposed a [[public-key cryptosystem]] based on the hardness of the '''LWE''' problem. The cryptosystem as well as the proof of security and correctness are completely classical. The system is characterized by <math>m,q</math> and a probability distribution <math>\chi</math> on <math>\mathbb{T}</math>. The setting of the parameters used in proofs of correctness and security is
* <math>q \geq 2 </math>, a prime number between <math>n^2</math> and <math>2n^2</math>.
* <math>m=(1+\epsilon)(n+1) \log{q}</math> for an arbitrary constant <math>\epsilon</math>
* <math>\chi=\Psi_{\alpha(n)}</math> for <math>\alpha(n) \in o(1/(\sqrt{n}\log{n}))</math>
 
The cryptosystem is then defined by the following procedures (a toy sketch in code follows the list):
* ''Private Key'': Private key is an <math>\mathbf{s}\in \mathbb{Z}^n_q</math> chosen uniformly at random.
* ''Public Key'': Choose <math>m</math> vectors <math>a_1,\ldots,a_m \in  \mathbb{Z}^n_q</math> uniformly and independently. Choose error offsets <math>e_1,\ldots,e_m \in \mathbb{T}</math> independently according to <math>\chi</math>. The public key consists of <math>(a_i,b_i=\langle a_i,\mathbf{s} \rangle/q + e_i)^m_{i=1}</math>
* ''Encryption'': The encryption of a bit <math>x \in \{0,1\}</math> is done by choosing a random subset <math>S</math> of <math>[m]</math> and then defining <math>Enc(x)</math> as <math>(\sum_{i \in S} a_i, x/2 + \sum_{i \in S} b_i)</math>
* ''Decryption'': The decryption of <math>(a,b)</math> is <math>0</math> if <math>b-\langle a, \mathbf{s} \rangle/q</math> is closer to <math>0</math> than to <math>\frac{1}{2}</math>, and <math>1</math> otherwise.
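
The following toy implementation follows the description above with small illustrative parameters. It is a sketch for exposition only: the parameter values are far below those required by the security proof, and all names are choices made here rather than anything from the paper.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n = 8
q = 97                                  # toy value; the proofs need a prime near n^2
alpha = 0.005
m = 2 * (n + 1) * int(np.log2(q))       # illustrative sample count

def keygen():
    s = rng.integers(0, q, size=n)                          # private key
    A = rng.integers(0, q, size=(m, n))
    e = rng.normal(0.0, alpha / np.sqrt(2 * np.pi), size=m)
    b = ((A @ s) / q + e) % 1.0                             # public key: (A, b)
    return s, (A, b)

def encrypt(pk, x):
    A, b = pk
    S = rng.random(m) < 0.5                                 # random subset of [m]
    return (A[S].sum(axis=0) % q, (x / 2 + b[S].sum()) % 1.0)

def decrypt(s, ct):
    a, c = ct
    d = (c - int(a @ s) / q) % 1.0
    # Output 0 if d is closer to 0 (mod 1) than to 1/2, else 1.
    return 0 if min(d, 1.0 - d) < abs(d - 0.5) else 1

s, pk = keygen()
assert all(decrypt(s, encrypt(pk, x)) == x for x in (0, 1))
</syntaxhighlight>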
 
The proof of correctness follows from the choice of parameters and some probability analysis. The proof of security is by reduction to the decision version of '''LWE''': an algorithm for distinguishing between encryptions (with the above parameters) of <math>0</math> and <math>1</math> can be used to distinguish between <math>A_{\mathbf{s},\chi}</math> and the uniform distribution over <math>\mathbb{Z}^n_q \times \mathbb{Z}_q</math>.
 
=== CCA-secure cryptosystem ===
{{Expand section|date=December 2009}}
Peikert<ref name="peikert09" /> proposed a system that is secure even against any [[chosen-ciphertext attack]].
 
== See also ==
*[[Lattice-based cryptography]]
 
==References==
<references/>
 
[[Category:Machine learning]]
[[Category:Cryptography]]
