We initiate the study of random-self-reducibility from an information-theoretic point of view. Specifically, we formally define the notion of a random-self-reduction that, with respect to a given ensemble of distributions, leaks a limited number of bits, i.e., produces target instances y_1, ..., y_κ in such a manner that each y_i has limited mutual information with the input x. We argue that this notion is useful in studying the relationships between random-self-reducibility and other properties of interest, including self-correctability and NP-hardness. In the case of self-correctability, we show that the information-theoretic definition of random-self-reducibility leads to somewhat different conclusions from those drawn by Feigenbaum, Fortnow, Laplante, and Naik [13], who used the standard definition. In the case of NP-hardness, we use the information-theoretic definition to strengthen the result of Feigenbaum and Fortnow [12], who proved, using the standard definition, that the polynomial hierarchy collapses if an NP-hard set is random-self-reducible.
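The leakage condition described above can be sketched in standard information-theoretic notation. This is only an illustrative formalization consistent with the abstract, not the paper's verbatim definition; the leakage bound ℓ and the treatment of X as a random variable over the given ensemble are assumptions.

```latex
% Sketch of the leakage condition (notation assumed, not quoted from the
% paper): X denotes the input drawn from the given ensemble of
% distributions, Y_1, ..., Y_kappa the target instances produced by the
% random-self-reduction, and I(·;·) mutual information.
\[
  I(X; Y_i) \le \ell \qquad \text{for each } i \in \{1, \dots, \kappa\},
\]
% i.e., no single target instance reveals more than \ell bits about the input.
```

Note that the bound is per-instance: each Y_i individually carries at most ℓ bits about X, which does not by itself bound the joint information I(X; Y_1, ..., Y_κ).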