How to get more mileage from randomness extractors
Author(s) - Ronen Shaltiel
Publication year - 2008
Publication title - Random Structures & Algorithms
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.314
H-Index - 69
eISSN - 1098-2418
pISSN - 1042-9832
DOI - 10.1002/rsa.20207
Subject(s) - randomness , extractor , min-entropy , distribution (mathematics) , uniform distribution (continuous) , discrete mathematics , combinatorics , algorithm , mathematics , computer science
Let 𝒞 be a class of distributions over {0,1}^n. A deterministic randomness extractor for 𝒞 is a function E : {0,1}^n → {0,1}^m such that for any X in 𝒞, the distribution E(X) is statistically close to the uniform distribution. A long line of research deals with explicit constructions of such extractors for various classes 𝒞 while trying to maximize m. In this paper we give a general transformation that transforms a deterministic extractor E which extracts “few” bits into an extractor E′ that extracts “almost all the bits present in the source distribution.” More precisely, we prove a general theorem saying that if E and 𝒞 satisfy certain properties, then we can transform E into an extractor E′ for 𝒞. Our methods build on (and generalize) a technique of Gabizon et al. [FOCS (2004) 394–403], which presents such a transformation for the very restricted class of “oblivious bit-fixing sources.” The high-level idea is to identify properties of E and 𝒞 that allow “recycling” the output of E so that it can be “reused” to operate again on the source distribution; the obvious obstacle is that the output of E is correlated with the source distribution. Using our transformation we give an explicit construction of a two-source extractor E : {0,1}^n × {0,1}^n → {0,1}^m such that for every two independent distributions X_1 and X_2 over {0,1}^n with min-entropy at least k = (1/2 + δ)n, and every ε ≥ 2^(−log^4 n), E(X_1, X_2) is ε-close to the uniform distribution on m = 2k − C_δ·log(1/ε) bits. This result is optimal up to the precise constant C_δ and improves previous results by Chor and Goldreich [SICOMP 17 (1988) 230–261], Vazirani [Combinatorica 7 (1987) 375–392], and Dodis et al. [RANDOM (2004) 334–344]. We also give explicit constructions of extractors for samplable distributions that extract many bits even out of “low-entropy” samplable distributions. These constructions rely on average-case hardness assumptions and extend previous results by Trevisan and Vadhan [FOCS (2000) 32–42], which give such extractors only for distributions with “high entropy.” © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2008
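For intuition, the following is a minimal, self-contained Python sketch — an illustration, not the construction from the paper — of the classical inner-product two-source extractor due to Chor and Goldreich and to Vazirani, i.e., the kind of base extractor E that outputs only “few” bits (here, a single bit) and that the paper's transformation is designed to stretch. The helper names (flat_source, statistical_distance_from_uniform) and the toy parameters are invented for this example.

# A minimal, illustrative sketch (not the paper's construction): the classical
# inner-product two-source extractor, which extracts a single almost-uniform
# bit from two independent sources over {0,1}^n whose min-entropies sum to
# more than n. The paper's transformation is about stretching such "few"
# output bits into almost all of the source entropy; the recycling step
# itself is not reproduced here.

import itertools
import random


def inner_product_extractor(x1, x2):
    """GF(2) inner product of two equal-length bit tuples: one output bit."""
    return sum(a & b for a, b in zip(x1, x2)) % 2


def statistical_distance_from_uniform(output_counts, total):
    """Statistical (total variation) distance of a one-bit output from uniform."""
    p_zero = output_counts.get(0, 0) / total
    return abs(p_zero - 0.5)


def flat_source(n, support_size, rng):
    """A 'flat' distribution: uniform over a random support of the given size,
    so its min-entropy is log2(support_size)."""
    return rng.sample(list(itertools.product((0, 1), repeat=n)), support_size)


if __name__ == "__main__":
    rng = random.Random(0)
    n = 10
    # Two independent flat sources with min-entropy 0.7*n each,
    # so k1 + k2 > n and the inner-product extractor has small bias.
    src1 = flat_source(n, 2 ** 7, rng)
    src2 = flat_source(n, 2 ** 7, rng)

    counts = {}
    for x1 in src1:
        for x2 in src2:
            b = inner_product_extractor(x1, x2)
            counts[b] = counts.get(b, 0) + 1

    dist = statistical_distance_from_uniform(counts, len(src1) * len(src2))
    print(f"bias of the extracted bit: {dist:.4f}")

In this toy setting the extracted bit is exactly balanced over each pair of flat sources only in expectation; enumerating the supports, as above, estimates how far it deviates, which is the ε that the abstract's statement quantifies for the paper's much stronger multi-bit construction.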