Key-Dependent Message Security: Generic Amplification and Completeness

From Bauman National Library

Abstract

Key-dependent message (KDM) secure encryption schemes provide secrecy even when the attacker sees encryptions of messages related to the secret key sk. Namely, the scheme should remain secure even when messages of the form f(sk) are encrypted, where f is taken from some function class F. A KDM amplification procedure takes an encryption scheme which satisfies F-KDM security and boosts it into a G-KDM secure scheme, where the function class G should be richer than F. It was recently shown by Brakerski et al. (TCC 2011) and Barak et al. (EUROCRYPT 2010) that a strong form of amplification is possible, provided that the underlying encryption scheme satisfies some special additional properties.

In this work, we prove the first generic KDM amplification theorem which relies solely on the KDM security of the underlying scheme without making any other assumptions. Specifically, we show that an elementary form of KDM security against functions in which each output bit either copies or flips a single bit of the key (aka projections ) can be amplified into KDM security with respect to any function family that can be computed in arbitrary fixed polynomial-time. Furthermore, our amplification theorem and its proof are insensitive to the exact setting of KDM security, and they hold in the presence of multiple-keys and in the symmetric-key/public-key and the CPA/CCA cases. As a result, we can amplify the security of all known KDM constructions, including ones that could not be amplified before.

Finally, we study the minimal conditions under which full-KDM security (with respect to all functions) can be achieved. We show that under a strong notion of KDM security, the existence of cyclic-secure fully homomorphic encryption is not only sufficient for full-KDM security, as shown by Barak et al., but also necessary. On the other hand, we observe that for standard KDM security, this condition can be relaxed by adapting Gentry’s bootstrapping technique (STOC 2009) to the KDM setting.


Introduction

The study of secure encryption schemes is perhaps the most central subject in cryptography. From the discovery of semantic security [24] to the formulation of CCA security [31, 33, 18], modern cryptography has successfully developed increasingly stronger notions of security providing secrecy in highly adversarial settings. Still, all these strong notions of security guarantee secrecy only as long as the encrypted messages are independent of the secret key. This limitation dates back to the seminal work of Goldwasser and Micali [24], who observed that semantic security may not hold if the adversary gets to see an encryption of the secret key. For many years, such usage scenarios were considered as “security bugs” that should be prevented by system designers.

A decade ago, the assumption of independence between the secret key and the encrypted data was challenged by Camenisch and Lysyanskaya [16] and Black et al. [11]. Specifically, Camenisch and Lysyanskaya considered schemes that remain secure under a “key cycle” usage, where we have t keys organized in a cycle and each key is encrypted under its left neighbor. A generalization of this notion, called key-dependent message (KDM) security, was suggested by Black et al. Informally, an encryption scheme is KDM(t)-secure with respect to a function class F if security holds even when the adversary can ask for an encryption of the message M = f(sk1, . . . , skt) under the i-th public key, where sk1, . . . , skt are the secret keys present in the system and f is an arbitrary function in F. This notion of security implies cyclic security if F is expressive enough (e.g., contains all “selector” functions), and it becomes strictly stronger when the function class F grows. Hence, one would like to achieve KDM security while making the function class F as large as possible.

The notion of KDM security was extensively studied in the past few years in several flavors, including the symmetric-key/public-key and the CPA/CCA settings [16, 11, 26, 9, 12, 15, 8, 27, 25, 5, 14, 2, 10, 13]. These works were motivated by the fundamental nature of the question as well as by concrete applications, including encrypted storage systems (e.g., BitLocker [12]), anonymous credentials [16], and realization of security proofs in the framework of axiomatic security [1, 11, 3]. (See [12] for more motivation and details.)

Although much is known today about KDM security, both on the positive and negative sides, it is still unclear whether a standard encryption scheme can be transformed into a scheme which provides KDM(t) security, even with respect to a single key (i.e., t = 1) and simple non-trivial function families (e.g., selectors).1 Hence, it is natural to move forward and explore the possibility of building strong KDM security given a weak form of KDM security as a primitive. This makes sense as, today, following the seminal work of Boneh et al. [12] and its follow-ups [15, 5, 13], it is known that a basic form of KDM security (with respect to the family of “affine functions”) can be achieved in several settings under various concrete cryptographic assumptions. Therefore, following [14] we ask:

Is there a generic transformation which amplifies KDM security from a weak family of functions F to a larger family of functions G ?

The two main features of such a procedure are generality – the transformation should work with any scheme which satisfies F-KDM security without relying on any other additional property – and a large amplification gap – ideally, F is a very simple function class whereas G is as rich as possible. The question of KDM amplification was recently addressed by Brakerski et al. [14] and Barak et al. [10], who made important progress by showing how to amplify the KDM security of several existing schemes. While these works achieve a relatively large amplification gap, they fall short of providing full generality, as they strongly rely on additional properties of the underlying scheme (i.e., simulatable-KDM security and entropic-KDM security – to be defined later). As a concrete example, it is unknown how to use any of these techniques to amplify the KDM security of the symmetric-key encryption scheme of [5], which is based on the Learning Parity With Noise (LPN) assumption. (See Section 1.3 for more details about these works and their relation to our approach.)

Our Results

We give an affirmative answer to the above question by providing the first generic KDM amplification procedure. In particular, we consider the projection function class of all functions f : (sk1 , . . . , skt) → v in which each output bit depends on (at most) a single bit of the input. Namely, each output bit vj is either fixed to a constant or copies/flips an original bit of one of the keys. We show that this elementary function family is complete in the following sense:
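As a concrete illustration, the following Python sketch builds such a projection. The name `make_projection` and the spec format are ours, purely for illustration; the paper defines the class abstractly. Each output bit is a constant, a copy of a single key bit, or its flip.

```python
def make_projection(spec):
    """Build a projection f: (sk1, ..., skt) -> bit list from a spec.

    spec has one entry per output bit (format invented for this sketch):
      ('const', b)    -> output bit is the constant b
      ('copy', k, i)  -> output bit copies bit i of key number k
      ('flip', k, i)  -> output bit is the negation of bit i of key number k
    Every output bit thus depends on at most a single bit of a single key.
    """
    def f(*keys):
        out = []
        for entry in spec:
            if entry[0] == 'const':
                out.append(entry[1])
            elif entry[0] == 'copy':
                out.append(keys[entry[1]][entry[2]])
            else:  # 'flip'
                out.append(1 - keys[entry[1]][entry[2]])
        return out
    return f

# A projection over two 4-bit keys (t = 2).
sk1 = [1, 0, 1, 1]
sk2 = [0, 0, 1, 0]
f = make_projection([('copy', 0, 2), ('flip', 1, 0), ('const', 1)])
print(f(sk1, sk2))  # -> [1, 1, 1]
```

Note how restricted the class is: no output bit may combine two key bits, which is what makes the completeness claim of Theorem 1 nontrivial.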

Theorem 1 (Completeness of projections, Informal). Let G be any function family which can be computed in some fixed polynomial time. Then, any encryption scheme which satisfies KDM(t) security with respect to projections can be transformed into a new encryption scheme which is KDM(t)-secure with respect to G .

Generality. Theorem 1 assumes nothing but KDM security regarding the underlying scheme. Furthermore, the theorem (and its surprisingly simple proof) is insensitive to the exact setting of KDM security: it holds for any number of keys (t), and in both symmetric-key/public-key and CPA/CCA settings. In all these cases, the new scheme is proven to be secure exactly in the same setting as the original scheme. This allows us, for example, to amplify the security of the affine-KDM secure scheme of [5], and obtain the first symmetric-key encryption scheme with strong KDM security based on the LPN assumption.

Large gap. Theorem 1 provides a large amplification gap. In fact, this gap can be further expanded as follows. First, we can achieve length-dependent KDM security [10], which means that the target family G can be taken to be the family of all polynomial-size circuits whose size grows with their input and output lengths at a fixed polynomial rate (e.g., the circuit size is quadratic in the input and output lengths). This family is very powerful, and it was shown to be rich enough for most known applications of KDM security [10].2 (See Section 3 for details.) In addition, in the case of CPA security (both in the public-key and symmetric-key settings), we can weaken the requirement from the underlying scheme and ask for KDM security with respect to projections with a single output: namely, all Boolean functions f(sk1, . . . , skt) → b which output a single bit of one of the keys, or its negation. This can be extended to the CCA setting via the transformations of [9, 15] (though in the public-key setting one has to employ, in addition, non-interactive zero-knowledge proofs).

The relaxation to single-output projections also enables a liberal interface into which we can easily plug previous constructions. For example, one can instantiate our reduction with schemes that enjoy KDM security with respect to affine functions, while almost ignoring technical details such as the underlying field and its representation. (These details required some effort in previous works. See the appendices in [14, 10, 13].) This, together with the simple proof of our main theorem, allows us to simplify the proofs of [10, 13] for the existence of length-dependent KDM-secure encryption schemes under the Decisional Diffie-Hellman (DDH) assumption [12], the Learning With Errors (LWE) assumption [5], and the Quadratic Residuosity (QR) assumption [13]. Given this completeness theorem, the current status of KDM security resembles the status of other “complete” primitives in cryptography such as one-way functions or oblivious transfer [32, 19]: we do not know how to build these primitives from generic weaker assumptions, but any instantiation of them suffices for an entire world of applications (i.e., all symmetric-key primitives in the case of one-way functions, and generic secure computation in the case of oblivious transfer, cf. [22, 23]).

Beyond length-dependent security. Although length-dependent KDM security seems to suffice for most applications, one can strive for an even stronger notion of security in which the KDM function class contains all functions (or, equivalently, all functions computable by circuits of arbitrary polynomial size). It is somewhat likely that any length-dependent secure scheme actually achieves full-KDM security (see the discussion in [10]). Still, one may want to construct such a scheme in a provably secure way. As a basic feasibility result, it was shown in [10] that any fully homomorphic encryption (FHE) scheme [20] which allows encrypting the secret key (i.e., is “cyclic-secure”) is also full-KDM secure. In light of the small number of FHE candidates [20, 17], and our limited understanding of this notion, one may ask whether it is possible to relax this requirement and achieve full-KDM security under weaker assumptions.

We make two simple observations regarding this question. First, we consider the case of simulatable KDM security [10], in which it should be possible to simulate an encryption of f(sk) given only the corresponding public key, in a way that remains indistinguishable even to someone who knows the secret key. We show that in this setting the two notions, circular-secure FHE and full-KDM security, are equivalent. Hence, achieving full-KDM security under a relaxed assumption requires using non-simulatable constructions.

Our second observation asserts that the bootstrapping technique of Gentry [20] can be used in the KDM setting as well (even in the case of non-simulatable constructions). That is, if one can construct an encryption scheme which guarantees KDM security with respect to circuits whose depth is only slightly larger than the depth of the decryption algorithm, then this scheme is actually fully KDM secure. Unfortunately, all known amplification techniques [10, 14], including the ones in this paper, amplify KDM security at the cost of making the decryption algorithm “deeper”. Still, we view this observation as an interesting direction for future research.

Our Techniques

To formalize the question of KDM amplification, we define the notion of reduction between KDM function families G ≤KDM F which means that any scheme that provides KDM security with respect to F can be transformed (via a fully black-box reduction) to a new scheme that satisfies KDM security with respect to G . We describe a novel way to derive such KDM reductions based on the machinery of randomized encoding of functions [29, 7]. Before we explain this notion, let us start with the simpler case of deterministic encoding.

Say that a function f deterministically encodes a function g if for every x the output of f (x) “encodes” the output of g(x) in the sense that g(x) can be efficiently computed based on f (x) and vice versa. That is, there are two efficiently computable mappings S and R such that S (g(x)) = f (x), and R(f (x)) = g(x). Suppose that we are given a scheme which provides KDM security with respect to the encoding f , and we would like to immunize it against the function g. This can be easily achieved by modifying the encryption scheme as follows: to encrypt a message M we first translate it into the f -encoding by computing S (M ), and then encrypt the result under the original encryption scheme. Decryption is done by applying the original decryption algorithm, and then applying the recovery algorithm R to translate the result back to its original form. Observe that an encryption of g(sk) in the new scheme is the same as an encryption of S (g(sk)) = f (sk) under the original scheme. Hence, the KDM security of the new scheme with respect to g reduces to the KDM security of the original scheme with respect to f.
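The transformation described above can be sketched as follows. The encryption scheme and the encoding pair (S, R) below are insecure toys invented only for illustration (S appends a parity bit, and the "cipher" is a key-XOR); what reflects the construction is the wrapping pattern: encode with S before encrypting, apply R after decrypting.

```python
# Toy sketch of the deterministic-encoding transformation (all pieces are
# insecure stand-ins; only the E_new/D_new wrapping mirrors the paper).

def S(m):          # toy encoder: append a parity bit to the message
    return m + [sum(m) % 2]

def R(c):          # recovery: drop the parity bit
    return c[:-1]

def E(pk, m):      # mock "encryption": XOR each bit with the key, cyclically
    return [b ^ pk[i % len(pk)] for i, b in enumerate(m)]

def D(sk, c):      # mock "decryption"
    return [b ^ sk[i % len(sk)] for i, b in enumerate(c)]

def E_new(pk, m):  # new scheme: encode first, then encrypt
    return E(pk, S(m))

def D_new(sk, c):  # new scheme: decrypt, then recover
    return R(D(sk, c))

key = [1, 0, 1]
msg = [0, 1, 1, 0]
assert D_new(key, E_new(key, msg)) == msg
# A KDM query for g(sk) in the new scheme is exactly an encryption of
# S(g(sk)) = f(sk) in the old scheme, so KDM security wrt f transfers to g.
```

The comment at the end is the entire security argument: each new-scheme KDM query is answered by a single old-scheme KDM query.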

This simple idea provides a direct reduction with a very nice structure: any KDM query for the new scheme is translated into a single KDM query for the original scheme. This simple single-query-to-single-query translation leads to a high level of generality: the transformation is insensitive to the exact KDM setting (symmetric-key/public-key and CPA/CCA), to the number of keys, and it can be used with respect to large function families G and F as long as every function in G is encoded by some function in F via a pair of universal mappings S and R. On the downside, one may complain that security was not really amplified, as the function g and its encoding f are essentially equivalent. It turns out that this drawback can be easily fixed by letting f be a randomized encoding of g.

In the case of randomized encoding (RE), the function f (x; r) depends not only on x but also on an additional random input r. For every fixed x, the output of f (x; r) is now viewed as a distribution (induced by a random choice of r) which should encode the value of g(x). Namely, there are two efficiently computable randomized mappings S and R such that for every x: (1) the distribution S (g(x)) is indistinguishable from f (x; r), and (2) with high probability over the choice of r (or even with probability one) R(f (x; r)) = g(x). One can view these conditions as saying that g(x) is encoded by a collection of functions {fr (x)}r , where fr (x) = f (x; r).
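A toy example may help to make the (S, R) interface concrete. The encoding below one-time-pads g(x) and reveals the pad; it satisfies the two conditions exactly (here even with perfect privacy), although it is a useless RE, since this f is no simpler than g. All names and the choice of g are ours.

```python
import secrets

# g can be any Boolean function; here, a 2-bit AND (chosen arbitrarily).
def g(x):
    return [x[0] & x[1]]

def f(x, r):   # encoding: one-time-pad g(x) with r, and reveal the pad
    y = g(x)
    return ([yi ^ ri for yi, ri in zip(y, r)], r)

def R(fx):     # recovery mapping: XOR the pad back off
    masked, r = fx
    return [mi ^ ri for mi, ri in zip(masked, r)]

def S(y):      # simulator mapping: sees only g(x), never x itself
    r = [secrets.randbelow(2) for _ in y]
    return ([yi ^ ri for yi, ri in zip(y, r)], r)

x = [1, 1]
r = [secrets.randbelow(2)]
assert R(f(x, r)) == g(x)      # condition (2): correct recovery
assert R(S(g(x))) == g(x)
# Condition (1) holds perfectly here: for uniform r, f(x; r) is uniform
# over the pairs that XOR to g(x), exactly the distribution of S(g(x)).
```

The whole point of Theorem 2 below is that, unlike this toy, the REs of [6] achieve the same interface while making each function f_r dramatically simpler than g.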

Now suppose that our scheme is KDM-secure with respect to the family {fr(x)}r. Then we can apply the above approach and get a scheme which satisfies KDM security with respect to g. The only difference is that now the message preprocessing step is randomized: to encrypt a message M, first encode it by the randomized mapping S(M), and then use the original encryption function. The security reduction is essentially the same, except that a KDM query for g in the new scheme is emulated by an old KDM query for a randomly chosen function fr. This idea can be easily extended to the case where all functions in G are encoded by functions in F:

Theorem 2 (Informal). If F is an RE of G , then G ≤KDM F .

The crux of this theorem is that, unlike deterministic encoding, randomized encoding can represent complicated functions by collections of very simple functions [29, 30, 7, 6]. Specifically, by combining the above theorem with the REs of [6], which, in turn, are based on Yao’s garbled circuit [34], we obtain our main result (Theorem 1).

Comparison with BGK and BHHI

Our techniques are inspired by both [14] (BGK) and [10] (BHHI). We believe that our approach inherits the positive features of each of these works, and sheds new light on the way they relate to each other. Let us review the main ideas behind these constructions and explain how they compare to our solution.

The BGK reduction

The starting point in [14] is an encryption scheme which satisfies entropic KDM security with respect to F. Roughly speaking, this means that KDM security should hold not only when sk is chosen uniformly from the key space K = {0, 1}k but also when it is chosen uniformly from a smaller subdomain K′ (e.g., K′ = {0, 1}εk). BGK show that in this setting, for every efficiently computable injective mapping α : K′ → K, one can amplify security from F to the class F ◦ α, i.e., with respect to functions f(α(sk)) for every f ∈ F. The idea is to choose the key sk′ from K′ and employ the original scheme with the key sk = α(sk′). This makes it possible to translate a KDM query f(α(sk′)) for the new scheme into an entropic-KDM query f(sk) for the old scheme.

The deterministic encoding (DE) approach is inspired by the BGK approach, and can be seen as a complementary solution. BGK extends a function f : K → M to f ◦ α : K′ → M by shrinking the key space (from K to K′), whereas in the DE approach f : K → M is extended to R ◦ f : K → M′ by padding messages, which effectively shrinks the message space (from M to M′ = R(M)).
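To make the contrast concrete, here is a minimal sketch of the BGK-style key-space extension f ∘ α. The mapping α (bit duplication) and the function f are arbitrary stand-ins chosen by us.

```python
# Toy BGK-style extension: alpha injectively maps a short key sk' in K'
# to a full key in K, and the amplified class consists of f ∘ alpha.

def alpha(sk_short):          # injective K' -> K: duplicate every bit
    return [b for b in sk_short for _ in range(2)]

def f(sk):                    # some function in F, defined on full keys
    return [sk[0] ^ sk[1], sk[2]]

f_alpha = lambda sk_short: f(alpha(sk_short))   # a function in F ∘ alpha

assert alpha([1, 0]) == [1, 1, 0, 0]
assert f_alpha([1, 0]) == f(alpha([1, 0]))
# Note the relation between the new class and F is fixed by alpha at
# key-generation time; there is no per-encryption randomness here.
```

The final comment is precisely the limitation the RE approach removes: by moving the work to the encryption algorithm, the link between G and F can be re-randomized at every query.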

As a result, BGK enjoys a similarly attractive security reduction with single-query-to-single-query translation. This leads to flexibility with respect to the KDM setting. Indeed, although the BGK approach is not fully general due to its use of entropic-KDM security (a notion which seems stronger than standard KDM security), it immediately generalizes to the CCA and the symmetric-key settings, as long as the underlying scheme provides entropic-KDM security. It should be mentioned that in our approach the amplification is achieved by modifying the encryption algorithm, rather than the key-generation algorithm as in BGK. This minor difference turns out to have a considerable effect on the amplification gap. First, it allows us to use fresh randomness in every application of the encryption algorithm, and so the linkage between functions in G and functions in F can be randomized. Indeed, this is exactly what allows us to exploit the power of randomized encoding. In contrast, the BGK approach tweaks the key-generation algorithm, and so the relation between G and F is bound to be deterministic. In addition, since our modification happens in the encryption (and decryption) phases, we can let the function class G grow not only with the security parameter but also with the size of the messages. This leads to the strong notion of length-dependent security, and in addition allows us to achieve KDM(t) where the number of keys t grows both with the message length and the security parameter.

In contrast, the family G of BGK cannot grow with the message length, and it can only contain a polynomial number of functions. This limitation prevents it from being used in applications which require KDM security wrt larger function classes (e.g., secure realization of symbolic protocols with axiomatic proofs of security). Furthermore, amplification for a large number of keys can be achieved only at the expense of putting more restrictions on the underlying scheme (i.e., simulatable KDM security). On the other hand, assuming these additional properties, the BGK approach can get KDM(t) for arbitrary unbounded t with respect to some concrete function families (e.g., constant-degree polynomials), whereas in our approach t is always bounded by some fixed polynomial (in the security parameter and message length).3 Finally, it is important to mention that the BGK reduction treats G in a black-box way, while the randomized encoding approach treats this class in a non-black-box way.

The BHHI reduction

The BHHI approach relies on a novel connection between homomorphic encryption and KDM security. First, it is observed that in order to obtain KDM security with respect to G it suffices to construct a scheme which provides both cyclic security (i.e., KDM security with respect to the identity function) and homomorphism with respect to the function family G, i.e., it should be possible to convert a ciphertext C = Epk(M) into C′ = Epk(g(M)) for every g ∈ G. Indeed, the homomorphism property can be used to convert a ciphertext Epk(sk) into the ciphertext Epk(g(sk)), and so cyclic security is amplified to G-KDM security.
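The data flow of this observation can be sketched as follows. The "scheme" below is an insecure mock we invented, whose Eval simply applies g to the stored plaintext; it carries no security and only illustrates how a cyclic ciphertext Epk(sk) is turned into a KDM ciphertext Epk(g(sk)).

```python
import secrets

# Insecure mock illustrating cyclic security + G-homomorphism => G-KDM.
# A "ciphertext" here is just the plaintext with a fresh nonce attached.

def KG():
    sk = [secrets.randbelow(2) for _ in range(4)]
    return sk, sk                  # toy symmetric-style keys: pk = sk

def E(pk, m):
    return {'nonce': secrets.token_hex(4), 'inner': list(m)}

def Eval(pk, g, c):                # homomorphism: maps E(m) to E(g(m))
    return E(pk, g(c['inner']))

def D(sk, c):
    return c['inner']

sk, pk = KG()
c_cyc = E(pk, sk)                  # cyclic query: an encryption of sk itself
g = lambda m: [1 - b for b in m]   # some function g in the class G
c_kdm = Eval(pk, g, c_cyc)         # now an encryption of g(sk)
assert D(sk, c_kdm) == g(sk)
```

In a real instantiation, Eval is the homomorphic evaluation of an FHE-like scheme and the starting ciphertext Epk(sk) is supplied by cyclic security; the mock only shows why the composition yields a KDM encryption of g(sk).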

BHHI construct such an encryption scheme by combining a two-message two-party secure computation protocol (i.e., based on Yao’s garbled circuit [34]) with a strong version of oblivious transfer which satisfies an additional cyclic-security property. The latter primitive is referred to as targeted encryption (TE). The basic idea is to view the homomorphic property as a secure-computation task in which the first party holds the message M and the second party holds the function g. The cyclic nature of the TE primitive makes it possible to implement this homomorphism even when the input M is the secret key. Finally, BHHI show that TE can be constructed based on an affine-KDM-secure encryption scheme which satisfies a strong notion of simulation: there exists a simulator which, given the public key pk, can simulate a ciphertext Epk(g(sk)) in a way which is indistinguishable even to someone who holds the secret key.

The BHHI construction seems conceptually different from our RE approach (i.e., homomorphism vs. encoding). Moreover, the construction itself is not only syntactically different, but also relies on different building blocks (e.g., TE). Still, the RE construction shares an important idea with BHHI: The use of secure-computation techniques. It is well known that REs are closely related to secure multiparty-computation (MPC) protocols, and, indeed, the role of REs in our reduction resembles the role of MPC in BHHI. In both solutions at some point the reduction applies the RE/MPC to the function g in G . Furthermore, both works achieve strong KDM security by instantiating the RE/MPC with Yao’s garbled circuit (GC) — a tool which leads to both stand-alone RE construction [6] and, when equipped with an OT, to a two-party secure-computation protocol.

It should be emphasized, however, that the actual constructions differ in some important aspects. While we essentially encrypt the whole GC-based encoding under the underlying KDM encryption scheme, BHHI tweak the GC protocol with a cyclic-secure OT (i.e., TE). Pictorially, our underlying KDM-secure scheme “wraps” the GC encoding, whereas in BHHI the KDM-secure primitive is “planted inside” the GC protocol. This difference affects both generality and simplicity as follows.

First, BHHI are forced to implement a KDM-secure OT, a primitive which seems much stronger than a standard KDM-secure encryption scheme. For example, KDM-secure symmetric-key encryption schemes can be constructed in the presence of a random oracle [11], while OT protocols cannot [28].4 Moreover, as we already mentioned, although TE can be based on several known affine-secure KDM schemes (i.e., ones which enable strong simulation), the LPN assumption (with constant error rate) is a concrete example under which symmetric-key encryption schemes with KDM security wrt affine functions exist, yet OT is not known to exist. Furthermore, since BHHI send the garbled circuit in the clear, it is not hard to show that the resulting scheme is not CCA-secure even if the TE provides CCA security. Finally, the modification of the GC protocol leads to a relatively complicated security proof.


Preliminaries

For a positive integer n ∈ N, let [n] denote the set {1, . . . , n}, and Un denote the uniform distribution over {0, 1}n. A function ε(n) is negligible if it tends to zero faster than 1/nc for every constant c > 0. The term efficient refers to probabilistic machines that run in polynomial time in the security parameter.

Efficient functions and randomized functions. A randomized function f : {0, 1}∗ × {0, 1}∗ → {0, 1}∗ is a function whose second input is treated as a random input. We write f(x; r) to denote the evaluation of f on deterministic input x and random input r, and typically assume length regularity and efficient evaluation as follows: there are efficiently computable polynomials m(n) and ℓ(n) and an efficiently computable circuit family fn : {0, 1}n × {0, 1}m(n) → {0, 1}ℓ(n) which computes the restriction of f to n-bit deterministic inputs. If the function is not length regular, we assume that the circuit family is indexed by a pair of input and output parameters (n, ℓ), and require evaluation in time poly(n, ℓ). Finally, a deterministic function corresponds to the special case where m(n) = 0.

Function ensembles. A function ensemble is a collection of functions {fz}z∈Z indexed by an index set Z ⊆ {0, 1}∗, where for each z the function fz has a finite domain {0, 1}n(z) and a finite range {0, 1}ℓ(z), where n, ℓ : {0, 1}∗ → N. By default, we assume that ensembles are efficiently computable, that is, the functions n(z), ℓ(z), as well as the function F(z, x) = fz(x), are computable in time poly(|z|). Hence n(z), ℓ(z) < poly(|z|). We also assume that |z| < poly(n(z), ℓ(z)).

Randomized encoding of functions. Intuitively, a randomized encoding of a function g(x) is a randomized mapping f(x; r) whose output distribution depends only on the output of g. We formalize this intuition via the notion of computationally private randomized encoding of [6], while adapting the original definition from a non-uniform adversarial setting to the uniform setting (i.e., adversaries are modeled by probabilistic polynomial-time Turing machines). Consider a function g = gn : {0, 1}n → {0, 1}ℓ(n) and a randomized function f = fn : {0, 1}n × {0, 1}m(n) → {0, 1}s(n), which are both efficiently computable. We say that f encodes g if there exist an efficient recovery algorithm Rec and an efficient simulator Sim that satisfy the following:

perfect correctness. For every x ∈ {0, 1}n, the error probabilities Pr[Rec(1n, f(x; Um(n))) ̸= g(x)] and Pr[Rec(1n, Sim(1n, g(x))) ̸= g(x)] are both zero.5

computational privacy. For every efficient adversary A we have that |Pr[Af(·;U)(1n) = 1] − Pr[ASim(g(·))(1n) = 1]| < neg(n),

where the oracles are defined as follows: Given x the first oracle returns a sample from f (x; Um(|x|)) and the second oracle returns a sample from Sim(1|x|, g(x)).

This notion is naturally extended to functions gn,ℓ which are not length-regular and are indexed by both input and output lengths. However, we always assume that privacy is parameterized only with the input length (i.e., the adversary’s running-time/distinguishing-probability should be polynomial/negligible in the input length.) Note that, without loss of generality, we can assume that the relevant output length ℓ is always known to the decoder and simulator (i.e., it can be always encoded as part of the output of fn,ℓ).

Encryption schemes (syntax). An encryption scheme consists of three efficient algorithms (KG, E, D), where KG is a key generation algorithm which given a security parameter 1k outputs a pair (sk, pk) of decryption and encryption keys; E is an encryption algorithm that takes a message M ∈ {0, 1}∗ and an encryption key pk and outputs a ciphertext C ; and D is a decryption algorithm that takes a ciphertext C and a decryption key sk and outputs a plaintext M ′. We also assume that both algorithms take the security parameter 1k as an additional input, but typically omit this dependency for simplicity. Correctness requires that the decryption error

max_M Pr[Dsk(Epk(M)) ̸= M]

should be negligible in k, where the maximum is over messages M and the probability is taken over the randomness of KG, E and D. For security parameter k, let Kk denote the space from which decryption keys are chosen. Without loss of generality, we always assume that Kk = {0, 1}k. Following Goldreich [23], we note that the above definition corresponds to both public-key and symmetric-key encryption schemes, where the latter corresponds to the special case in which the decryption key sk and encryption key pk are equal. As we will see, the difference between the two settings will be part of the security definitions.
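To make the (KG, E, D) syntax concrete, here is a minimal sketch in the symmetric special case pk = sk. The cipher itself (a SHA-256-derived keystream over key, nonce and counter) is a toy we chose for brevity, not a construction from the paper, and should not be treated as a vetted scheme.

```python
import hashlib
import secrets

# Minimal (KG, E, D) matching the syntax above, with pk = sk (symmetric case).
# The keystream construction is a toy stand-in; it illustrates interfaces only.

def KG(k):
    sk = secrets.token_bytes(k // 8)
    return sk, sk                        # symmetric: encryption key = decryption key

def _stream(key, nonce, n):
    """Derive n keystream bytes from key, nonce, and a running counter."""
    out = b''
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, 'big')).digest()
        ctr += 1
    return out[:n]

def E(pk, m):
    nonce = secrets.token_bytes(16)      # fresh randomness per encryption
    return nonce + bytes(a ^ b for a, b in zip(m, _stream(pk, nonce, len(m))))

def D(sk, c):
    nonce, body = c[:16], c[16:]
    return bytes(a ^ b for a, b in zip(body, _stream(sk, nonce, len(body))))

sk, pk = KG(128)
msg = b'key-dependent message'
assert D(sk, E(pk, msg)) == msg          # zero decryption error for this toy
```

Note that the toy has decryption error zero; the definition above only demands that the error be negligible in k, which is what allows, e.g., lattice-based instantiations.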