

Title:
METHOD AND APPARATUS FOR UTILITY-AWARE PRIVACY PRESERVING MAPPING AGAINST INFERENCE ATTACKS
Document Type and Number:
WIPO Patent Application WO/2015/026384
Kind Code:
A1
Abstract:
The present principles focus on the privacy-utility tradeoff encountered by a user who wishes to release to an analyst some public data (denoted by X) that is correlated with his private data (denoted by S), in the hope of getting some utility. The public data is distorted before its release according to a probabilistic privacy preserving mapping mechanism, which limits information leakage under utility constraints. In particular, this probabilistic privacy mechanism is modeled as a conditional distribution, P_(Y|X), where Y is the data actually released to the analyst. The present principles design utility-aware privacy preserving mapping mechanisms against inference attacks, when only partial, or no, statistical knowledge of the prior distribution, P_(S,X), is available. Specifically, using maximal correlation techniques, the present principles provide a separability result on the information leakage that leads to the design of the privacy preserving mapping.

Inventors:
FAWAZ NADIA (US)
KAKHAKI ABBASALI MAKHDOUMI (US)
Application Number:
PCT/US2013/071284
Publication Date:
February 26, 2015
Filing Date:
November 21, 2013
Assignee:
THOMSON LICENSING (FR)
FAWAZ NADIA (US)
KAKHAKI ABBASALI MAKHDOUMI (US)
International Classes:
G06F21/62
Foreign References:
US20100024042A12010-01-28
EP2241986A12010-10-20
Other References:
FLAVIO DU PIN CALMON ET AL: "Privacy against statistical inference", COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), 2012 50TH ANNUAL ALLERTON CONFERENCE ON, IEEE, 1 October 2012 (2012-10-01), pages 1401 - 1408, XP032345161, ISBN: 978-1-4673-4537-8, DOI: 10.1109/ALLERTON.2012.6483382
ELZA ERKIP ET AL: "The Efficiency of Investment Information", IEEE TRANSACTIONS ON INFORMATION THEORY, IEEE PRESS, USA, vol. 44, no. 3, 3 May 1998 (1998-05-03), XP011027054, ISSN: 0018-9448
"Privacy-Preserving Data Mining", vol. 34, 31 December 2008, SPRINGER US, Boston, MA, ISBN: 978-0-38-770992-5, ISSN: 1386-2944, article JOSEP DOMINGO-FERRER: "A Survey of Inference Control Methods for Privacy-Preserving Data Mining", pages: 53 - 80, XP055102636, DOI: 10.1007/978-0-387-70992-5_3
I.S. REED: "Information theory and privacy in data banks", PROCEEDINGS OF THE JUNE 4-8, 1973, NATIONAL COMPUTER CONFERENCE AND EXPOSITION ON, AFIPS '73, 4 June 1973 (1973-06-04), New York, New York, USA, pages 581, XP055090933, DOI: 10.1145/1499586.1499731
D. KIFER; A. MACHANAVAJJHALA: "A rigorous and customizable framework for privacy", ACM PODS, 2012
V. ANANTHARAM; A. GOHARI; S. KAMATH; C. NAIR: "On maximal correlation, hypercontractivity, and the data processing inequality studied by Erkip and Cover", arXiv preprint arXiv:1304.6133, 2013
H. O. HIRSCHFELD: "A connection between correlation and contingency", PROCEEDINGS OF THE CAMBRIDGE PHILOSOPHICAL SOCIETY, vol. 31
H. GEBELEIN: "Das statistische Problem der Korrelation als Variations- und Eigenwert-problem und sein Zusammenhang mit der Ausgleichungsrechnung", ZEITSCHRIFT FUR ANGEW. MATH. UND MECH., vol. 21, 1941, pages 364 - 379
A. RENYI: "On measures of dependence", ACTA MATHEMATICA HUNGARICA, vol. 10, no. 3
S. KAMATH; V. ANANTHARAM: "Non-interactive simulation of joint distributions: The Hirschfeld-Gebelein-Rényi maximal correlation and the hypercontractivity ribbon", COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), 2012
R. AHLSWEDE; P. GACS: "Spreading of sets in product spaces and hypercontraction of the markov operator", THE ANNALS OF PROBABILITY
H. S. WITSENHAUSEN: "On sequences of pairs of dependent random variables", SIAM JOURNAL ON APPLIED MATHEMATICS, vol. 28, no. 1
Attorney, Agent or Firm:
SHEDD, Robert, D. et al. (2 Independence Way, Suite 200, Princeton, New Jersey, US)
Claims:
CLAIMS:

1. A method for processing user data for a user, comprising the steps of: accessing the user data, which includes private data and public data, the private data corresponding to a first category of data, and the public data corresponding to a second category of data;

decoupling (310, 420) dependencies between the first category of data and the second category of data, from dependencies between the second category of data and released data;

determining (120, 320, 430) a privacy preserving mapping that maps the second category of data to the released data responsive to the dependencies between the second category of data and the released data;

modifying (130, 330, 440) the public data for the user based on the privacy preserving mapping; and

releasing (140, 340, 450) the modified data to at least one of a service provider and a data collecting agency.

2. The method of claim 1, wherein the public data comprises data that the user has indicated can be publicly released, and the private data comprises data that the user has indicated is not to be publicly released.

3. The method of claim 1, further comprising the step of:

determining the dependencies between the first category of data and the second category of data responsive to mutual information between the first category of data and the second category of data.

4. The method of claim 1, wherein the steps of decoupling and determining a privacy preserving mapping are based on maximal correlation techniques.

5. The method of claim 1, further comprising the step of:

accessing a constraint on utility, the utility being responsive to the second category of data and the released data, wherein the step of determining a privacy preserving mapping is further responsive to the utility constraint.

6. The method of claim 1, wherein the step of determining a privacy preserving mapping comprises:

minimizing (320, 430) the maximum information leakage between the first category of data and the released data.

7. The method of claim 1, further comprising the step of:

accessing (110, 410) statistical information based on the second category of data from other users, wherein the statistical information is used to determine the privacy preserving mapping.

8. The method of claim 7, wherein the step of determining comprises determining independently of a joint distribution between the first category of data and the second category of data.

9. The method of claim 7, wherein the step of determining comprises determining independently of a marginal distribution of the second category of data.

10. The method of claim 1, further comprising the step of receiving service based on the released distorted data.

11. An apparatus (500) for processing user data for a user, comprising:

a processor configured to access the user data, which includes private data and public data, the private data corresponding to a first category of data, and the public data corresponding to a second category of data;

a privacy preserving mapping decision module (530) coupled to the processor and configured to

decouple dependencies between the first category of data and the second category of data, from dependencies between the second category of data and released data, and

determine a privacy preserving mapping that maps the second category of data to the released data responsive to the dependencies between the second category of data and the released data;

a privacy preserving module (540) configured to

modify the public data for the user based on the privacy preserving mapping, and

release the modified data to at least one of a service provider and a data collecting agency.

12. The apparatus of claim 11, wherein the public data comprises data that the user has indicated can be publicly released, and the private data comprises data that the user has indicated is not to be publicly released.

13. The apparatus of claim 11, wherein the privacy preserving mapping decision module (530) determines the dependencies between the first category of data and the second category of data responsive to mutual information between the first category of data and the second category of data.

14. The apparatus of claim 11, wherein the privacy preserving mapping decision module (530) decouples dependencies and determines a privacy preserving mapping based on maximal correlation techniques.

15. The apparatus of claim 11, wherein the privacy preserving mapping decision module (530) accesses a constraint on utility, the utility being responsive to the second category of data and the released data, and determines the privacy preserving mapping responsive to the utility constraint.

16. The apparatus of claim 11, wherein the privacy preserving mapping decision module (530) minimizes the maximum information leakage between the first category of data and the released data.

17. The apparatus of claim 11, wherein the privacy preserving mapping decision module (530) accesses statistical information based on the second category of data from other users, wherein the statistical information is used to determine the privacy preserving mapping.

18. The apparatus of claim 17, wherein the privacy preserving mapping decision module determines the privacy preserving mapping independently of a joint distribution between the first category of data and the second category of data.

19. The apparatus of claim 17, wherein the privacy preserving mapping decision module determines the privacy preserving mapping independently of a marginal distribution of the second category of data.

20. The apparatus of claim 11, further comprising a processor configured to receive service based on the released distorted data.

21. A computer readable storage medium having stored thereon instructions for improving privacy of user data for a user, according to claims 1-10.

Description:
Method and Apparatus for Utility-Aware Privacy Preserving Mapping against Inference Attacks

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of the filing date of the following U.S. Provisional Application, which is hereby incorporated by reference in its entirety for all purposes: Serial No. 61/867,543, filed on August 19, 2013, and titled "Method and Apparatus for Utility-Aware Privacy Preserving Mapping against Inference Attacks." This application is related to U.S. Provisional Patent Application Serial No. 61/691,090, filed on August 20, 2012, and titled "A Framework for Privacy against Statistical Inference" (hereinafter "Fawaz"). The provisional application is expressly incorporated by reference herein in its entirety.

In addition, this application is related to the following applications: (1) Attorney Docket No. PU130121, entitled "Method and Apparatus for Utility-Aware Privacy Preserving Mapping in View of Collusion and Composition," and (2) Attorney Docket No. PU130122, entitled "Method and Apparatus for Utility-Aware Privacy Preserving Mapping Through Additive Noise," which are commonly assigned, incorporated by reference in their entireties, and concurrently filed herewith.

TECHNICAL FIELD

This invention relates to a method and an apparatus for preserving privacy, and more particularly, to a method and an apparatus for generating a privacy preserving mapping mechanism without the full knowledge of the joint distribution of the private data and public data to be released.

BACKGROUND

In the era of Big Data, the collection and mining of user data has become a fast growing and common practice by a large number of private and public institutions. For example, technology companies exploit user data to offer personalized services to their customers, government agencies rely on data to address a variety of challenges (e.g., national security, national health, budget and fund allocation), and medical institutions analyze data to discover the origins of, and potential cures for, diseases. In some cases, the collection, the analysis, or the sharing of a user's data with third parties is performed without the user's consent or awareness. In other cases, data is released voluntarily by a user to a specific analyst, in order to get a service in return, e.g., product ratings released to get recommendations. This service, or other benefit that the user derives from allowing access to the user's data, may be referred to as utility. In either case, privacy risks arise because some of the collected data may be deemed sensitive by the user, e.g., political opinion, health status, or income level, or may seem harmless at first sight, e.g., product ratings, yet lead to the inference of more sensitive data with which it is correlated. The latter threat refers to an inference attack, a technique of inferring private data by exploiting its correlation with publicly released data.

SUMMARY

The present principles provide a method for processing user data for a user, comprising the steps of: accessing the user data, which includes private data and public data, the private data corresponding to a first category of data, and the public data corresponding to a second category of data; decoupling dependencies between the first category of data and the second category of data, from dependencies between the second category of data and released data; determining a privacy preserving mapping that maps the second category of data to the released data responsive to the dependencies between the second category of data and the released data; modifying the public data for the user based on the privacy preserving mapping; and releasing the modified data to at least one of a service provider and a data collecting agency as described below. The present principles also provide an apparatus for performing these steps.

The present principles also provide a method for processing user data for a user, comprising the steps of: accessing the user data, which includes private data and public data, the private data corresponding to a first category of data, and the public data corresponding to a second category of data; determining dependencies between the first category of data and the second category of data responsive to mutual information between the first category of data and the second category of data; decoupling the dependencies between the first category of data and the second category of data, from dependencies between the second category of data and released data; determining a privacy preserving mapping that maps the second category of data to the released data responsive to the dependencies between the second category of data and the released data based on maximal correlation techniques; modifying the public data for the user based on the privacy preserving mapping; and releasing the modified data to at least one of a service provider and a data collecting agency as described below. The present principles also provide an apparatus for performing these steps.

The present principles also provide a computer readable storage medium having stored thereon instructions for processing user data for a user according to the methods described above.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a flow diagram depicting an exemplary method for preserving privacy, in accordance with an embodiment of the present principles.

FIG. 2 is a flow diagram depicting an exemplary method for preserving privacy when the joint distribution between the private data and public data is known, in accordance with an embodiment of the present principles.

FIG. 3 is a flow diagram depicting an exemplary method for preserving privacy when the joint distribution between the private data and public data is unknown and the marginal probability measure of the public data is also unknown, in accordance with an embodiment of the present principles.

FIG. 4 is a flow diagram depicting an exemplary method for preserving privacy when the joint distribution between the private data and public data is unknown but the marginal probability measure of the public data is known, in accordance with an embodiment of the present principles.

FIG. 5 is a block diagram depicting an exemplary privacy agent, in accordance with an embodiment of the present principles.

FIG. 6 is a block diagram depicting an exemplary system that has multiple privacy agents, in accordance with an embodiment of the present principles.

FIG. 7 is a pictorial example illustrating different privacy metrics, in accordance with an embodiment of the present principles.

DETAILED DESCRIPTION

In the database and cryptography literatures from which differential privacy arose, the focus has been algorithmic. In particular, researchers have used differential privacy to design privacy preserving mechanisms for inference algorithms, for transporting data, and for querying data. More recent works have focused on the relation of differential privacy with statistical inference. It has been shown that differential privacy does not guarantee a limited information leakage. Other frameworks similar to differential privacy exist, such as the Pufferfish framework, which can be found in an article by D. Kifer and A. Machanavajjhala, "A rigorous and customizable framework for privacy," in ACM PODS, 2012, and which, however, does not focus on utility preservation.

Many approaches rely on information-theoretic techniques to model and analyze the privacy-accuracy tradeoff. Most of these information-theoretic models focus mainly on collective privacy for all or subsets of the entries of a database, and provide asymptotic guarantees on the average remaining uncertainty per database entry, or equivocation per input variable, after the output release. In contrast, the framework studied in the present application provides privacy in terms of bounds on the information leakage that an analyst achieves by observing the released output. We consider the setting described in Fawaz, where a user has two kinds of data that are correlated: some data that he would like to remain private, and some non-private data that he is willing to release to an analyst and from which he may derive some utility, for example, the release of media preferences to a service provider to receive more accurate content recommendations.

The term analyst, which for example may be a part of a service provider's system, as used in the present application, refers to a receiver of the released data, who ostensibly uses the data in order to provide utility to the user. Often the analyst is a legitimate receiver of the released data. However, an analyst could also illegitimately exploit the released data and infer some information about private data of the user. This creates a tension between privacy and utility requirements. To reduce the inference threat while maintaining utility, the user may release a "distorted version" of data, generated according to a conditional probabilistic mapping, called a "privacy preserving mapping," designed under a utility constraint.

In the present application, we refer to the data a user would like to remain private as "private data," the data the user is willing to release as "public data," and the data the user actually releases as "released data." For example, a user may want to keep his political opinion private, and is willing to release his TV ratings with modification (for example, the user's actual rating of a program is 4, but he releases the rating as 3). In this case, the user's political opinion is considered to be private data for this user, the TV ratings are considered to be public data, and the released modified TV ratings are considered to be the released data. Note that another user may be willing to release both political opinion and TV ratings without modifications, and thus, for this other user, there is no distinction between private data, public data and released data when only political opinion and TV ratings are considered. If many people release political opinions and TV ratings, an analyst may be able to derive the correlation between political opinions and TV ratings, and thus, may be able to infer the political opinion of the user who wants to keep it private.

Regarding private data, this refers to data that the user not only indicates should not be publicly released, but also that he does not want to be inferred from other data that he would release. Public data is data that the user would allow the privacy agent to release, possibly in a distorted way to prevent the inference of the private data. In one embodiment, public data is the data that the service provider requests from the user in order to provide him with the service. The user however will distort (i.e., modify) it before releasing it to the service provider. In another embodiment, public data is the data that the user indicates as being "public" in the sense that he would not mind releasing it as long as the release takes a form that protects against inference of the private data.

As discussed above, whether a specific category of data is considered as private data or public data is based on the point of view of a specific user. For ease of notation, we refer to a specific category of data as private data or public data from the perspective of the current user. For example, when trying to design a privacy preserving mapping for a current user who wants to keep his political opinion private, we refer to the political opinion as private data for both the current user and for another user who is willing to release his political opinion.

In the present principles, we use the distortion between the released data and the public data as a measure of utility. When the distortion is larger, the released data is more different from the public data, and more privacy is preserved, but the utility derived from the distorted data may be lower for the user. On the other hand, when the distortion is smaller, the released data is a more accurate representation of the public data and the user may receive more utility, for example, receive more accurate content recommendations.

In one embodiment, to preserve privacy against statistical inference, we model the privacy-utility tradeoff and design the privacy preserving mapping by solving an optimization problem minimizing the information leakage, which is defined as mutual information between private data and released data, subject to a distortion constraint.

In Fawaz, finding the privacy preserving mapping relies on the fundamental assumption that the prior joint distribution that links private data and released data is known and can be provided as an input to the optimization problem. In practice, the true prior distribution may not be known, but rather some prior statistics may be estimated from a set of sample data that can be observed. For example, the prior joint distribution could be estimated from a set of users who do not have privacy concerns and publicly release different categories of data, that may be considered to be private or public data by the users who are concerned about their privacy.

Alternatively when the private data cannot be observed, the marginal distribution of the public data to be released, or simply its second order statistics, may be estimated from a set of users who only release their public data. The statistics estimated based on this set of samples are then used to design the privacy preserving mapping mechanism that will be applied to new users, who are concerned about their privacy. In practice, there may also exist a mismatch between the estimated prior statistics and the true prior statistics, due for example to a small number of observable samples, or to the incompleteness of the observable data. The present principles propose methods to design utility-aware privacy preserving mapping mechanisms when only partial statistical knowledge of the prior is available. More precisely, using recent information theoretic results on Maximal (Renyi) correlation, we first provide a separable upper bound on the information leakage, that decouples intrinsic dependencies (that is, dependencies that are inherent to the data) between the private data and the public data to be released, from the designed dependencies (that is, dependencies that are added by design) between the public data to be released and the actual released data. Consequently, we are able to design privacy preserving mapping mechanisms with only partial prior knowledge of the public data to be released, instead of requiring full knowledge of the joint distribution of the private data and public data to be released.

In one embodiment, we characterize the privacy-utility tradeoff in terms of an optimization problem. We also give an upper bound on the probability of inferring private data by observing the released data.

To formulate the problem, the public data is denoted by a random variable $X \in \mathcal{X}$ with probability distribution $P_X$. $X$ is correlated with the private data, denoted by a random variable $S \in \mathcal{S}$. The correlation of $S$ and $X$ is defined by the joint distribution $P_{S,X}$. The released data, denoted by a random variable $Y \in \mathcal{Y}$, is a distorted version of $X$. $Y$ is obtained by passing $X$ through a kernel, $P_{Y|X}$. In the present application, the term "kernel" refers to a conditional probability that maps data $X$ to data $Y$ probabilistically. That is, the kernel $P_{Y|X}$ is the privacy preserving mapping that we wish to design. Since $Y$ is a probabilistic function of $X$ only, in the present application we assume $S \to X \to Y$ form a Markov chain. Therefore, once we define $P_{Y|X}$, we have the joint distribution $P_{S,X,Y} = P_{Y|X} P_{S,X}$ and in particular the joint distribution $P_{S,Y}$.

In the following, we first define the privacy notion, and then the accuracy notion.

Definition 1. Assume $S \to X \to Y$. A kernel $P_{Y|X}$ is called $\epsilon$-divergence private if the distribution $P_{S,Y}$ resulting from the joint distribution $P_{S,X,Y} = P_{Y|X} P_{S,X}$ satisfies

$$D\big(P_{S,Y} \,\|\, P_S P_Y\big) = \mathbb{E}_{P_Y}\big[D\big(P_{S|Y} \,\|\, P_S\big)\big] = I(S;Y) \le \epsilon\, H(S), \qquad (1)$$

where $D(\cdot\|\cdot)$ is the K-L divergence, $\mathbb{E}(\cdot)$ is the expectation of a random variable, $H(\cdot)$ is the entropy, $\epsilon \in [0,1]$ is called the leakage factor, and the mutual information $I(S;Y)$ represents the information leakage. We say a mechanism has full privacy if $\epsilon = 0$. In the extreme cases, $\epsilon = 0$ implies that the released random variable $Y$ is independent of the private random variable $S$, and $\epsilon = 1$ implies that $S$ is fully recoverable from $Y$ ($S$ is a deterministic function of $Y$). Note that one can make $Y$ completely independent of $S$ to obtain full privacy ($\epsilon = 0$), but this may lead to a poor accuracy level.
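For finite alphabets, the leakage factor of a candidate mapping can be evaluated directly from Definition 1. The following is a minimal sketch of such a computation (my own illustration, not code from the patent); it assumes $P_{S,X}$ and $P_{Y|X}$ are given as NumPy arrays.

```python
# Illustrative helper: leakage factor epsilon = I(S;Y)/H(S) of Definition 1
# for finite alphabets, given P_{S,X} and a candidate mapping P_{Y|X}.
import numpy as np

def leakage_factor(P_SX, P_YgX, tiny=1e-15):
    P_SY = P_SX @ P_YgX                       # S -> X -> Y Markov chain
    P_S = P_SX.sum(axis=1)
    P_Y = P_SY.sum(axis=0)
    I_SY = np.sum(P_SY * np.log2((P_SY + tiny) / (np.outer(P_S, P_Y) + tiny)))
    H_S = -np.sum(P_S * np.log2(P_S + tiny))
    return I_SY / H_S                         # epsilon in [0, 1]
```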

We define accuracy as follows.

Definition 2. Let $d: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}^+$ be a distortion measure. A kernel $P_{Y|X}$ is called $D$-accurate if $\mathbb{E}[d(X,Y)] \le D$.

It should be noted that any distortion metric can be used, such as the Hamming distance if $X$ and $Y$ are binary vectors, or the Euclidean norm if $X$ and $Y$ are real vectors, or even more complex metrics modeling the variation in utility that a user would derive from the release of $Y$ instead of $X$. The latter could, for example, represent the difference in the quality of content recommended to the user based on the release of his distorted media preferences $Y$ instead of his true preferences $X$.
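As a concrete illustration of the two simplest choices mentioned above (the alphabet and values below are arbitrary examples of mine), a distortion measure over finite alphabets can be encoded as a matrix indexed by (x, y), which is also the form used by the optimization sketches later in this description.

```python
import numpy as np

values = np.array([0.0, 1.0, 2.0, 3.0])  # an example alphabet, with Y ranging over the same values
hamming = (values[:, None] != values[None, :]).astype(float)  # d(x, y) = 1 if x != y, else 0
absolute = np.abs(values[:, None] - values[None, :])          # |x - y|, the scalar case of the Euclidean norm
```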

There is a tradeoff between the leakage factor $\epsilon$ and the distortion level $D$ of a privacy preserving mapping. In one embodiment, our objective is to limit the amount of private information that can be inferred, given a utility constraint. When inference is measured by the information leakage between private data and released data, and utility is indicated by the distortion between public data and released data, the objective can be mathematically formulated as finding the probability mapping $P_{Y|X}$ that minimizes the maximum information leakage $I(S;Y)$ given a distortion constraint, where the maximum is taken over the uncertainty in the statistical knowledge on the distribution $P_{S,X}$ available at the privacy agent:

$$\min_{P_{Y|X}} \max \; I(S;Y), \quad \text{s.t. } \mathbb{E}[d(X,Y)] \le D.$$

The probability distribution $P_{S,Y}$ can be obtained from the joint distribution $P_{S,X,Y} = P_{Y|X} P_{S,X} = P_{Y|X} P_{S|X} P_X$. Depending on the knowledge of the statistics, the optimization problem can be written in different ways:

(1) When the joint distribution $P_{S,X}$ is known (no remaining uncertainty on $P_{S,X}$), the privacy preserving mapping $P_{Y|X}$ is the solution to the following optimization problem:

$$\min_{P_{Y|X}} I(S;Y), \quad \text{s.t. } \mathbb{E}[d(X,Y)] \le D.$$

(2) When the marginal distribution $P_X$ is known, but not the joint distribution $P_{S,X}$, the privacy preserving mapping $P_{Y|X}$ is the solution to the following optimization problem:

$$\min_{P_{Y|X}} \max_{P_{S|X}} I(S;Y), \quad \text{s.t. } \mathbb{E}[d(X,Y)] \le D.$$

(3) When neither the joint distribution $P_{S,X}$ nor the marginal distribution $P_X$ is known (full uncertainty on $P_{S,X}$), the privacy preserving mapping $P_{Y|X}$ is the solution to the following optimization problem:

$$\min_{P_{Y|X}} \max_{P_{S,X}} I(S;Y), \quad \text{s.t. } \mathbb{E}[d(X,Y)] \le D.$$

Problems (1) to (3) describe settings with increasing uncertainty, that is, decreasing knowledge, on the joint statistics of S and X. It should be noted that the amount of statistical knowledge available on S and X affects the amount of distortion required to meet a certain level of privacy (for example, a target leakage factor). More precisely, in any of the three problems above, the same range of leakage factors can be achieved; however, for a given leakage factor, mappings obtained by solving problems with less statistical knowledge may lead to higher distortion.

Similarly, if one fixes the amount of distortion allowed (D), mappings obtained in settings with less statistical knowledge may have a higher leakage factor. In summary, the more knowledge about the joint statistics of S and X is available, the better the privacy-accuracy tradeoff that can be achieved.

In the following, we discuss in further detail how to solve the optimization problem under different levels of knowledge of the statistics.

Joint distribution $P_{S,X}$ is known

For a given joint distribution $P_{S,X}$, the optimal privacy preserving mapping is characterized as the kernel achieving the minimum objective of

$$\min_{P_{Y|X}} I(S;Y) \quad \text{s.t. } \mathbb{E}[d(X,Y)] \le D, \; P_{Y|X} \text{ is a valid conditional distribution.} \qquad (2)$$

This optimization problem is introduced in Fawaz, where it is shown to be a convex optimization. Therefore, the optimization problem can be solved by any available convex solver or by interior-point methods.
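For concreteness, the following is a minimal sketch of how Eq. (2) could be set up with an off-the-shelf convex solver for finite alphabets. This is my own illustration under those assumptions, not the patent's reference implementation; the function name privacy_mapping and its arguments are hypothetical. The leakage I(S;Y) is expressed through the jointly convex relative entropy, so the problem is DCP-compliant.

```python
# Sketch of Eq. (2): min_{P_{Y|X}} I(S;Y)  s.t.  E[d(X,Y)] <= D,
# assuming the joint distribution P_{S,X} is known and alphabets are finite.
import numpy as np
import cvxpy as cp

def privacy_mapping(P_SX, dist, D):
    """P_SX: |S| x |X| joint distribution; dist: |X| x |Y| distortion matrix; D: budget."""
    nS, nX = P_SX.shape
    nY = dist.shape[1]
    P_X = P_SX.sum(axis=0)                       # marginal of the public data X
    P_S = P_SX.sum(axis=1)                       # marginal of the private data S
    P_YgX = cp.Variable((nX, nY), nonneg=True)   # the mapping P_{Y|X} being designed

    P_SY = P_SX @ P_YgX                          # P_{S,Y}, affine in P_{Y|X} (S - X - Y Markov chain)
    P_Y = P_X @ P_YgX                            # marginal of the released data Y, also affine
    # I(S;Y) = D(P_{S,Y} || P_S P_Y); rel_entr(a, b) = a*log(a/b) is jointly convex,
    # so the leakage is convex in P_{Y|X} (natural-log units, i.e., nats).
    P_S_outer_P_Y = np.reshape(P_S, (nS, 1)) @ cp.reshape(P_Y, (1, nY))
    leakage = cp.sum(cp.rel_entr(P_SY, P_S_outer_P_Y))

    exp_dist = cp.sum(cp.multiply(P_X[:, None] * dist, P_YgX))   # E[d(X,Y)]
    constraints = [cp.sum(P_YgX, axis=1) == 1, exp_dist <= D]    # valid conditional distribution + distortion
    cp.Problem(cp.Minimize(leakage), constraints).solve()        # needs an exponential-cone solver (e.g. ECOS/SCS)
    return P_YgX.value, leakage.value
```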

The minimum objective of Eq. (2) is denoted by $L(D)$. A privacy preserving mapping is called $(\epsilon, D)$-divergence-distortion private if its leakage factor and expected distortion are not greater than $\epsilon$ and $D$, respectively. Next, we provide an example of the optimization given in Eq. (2) and its solution.

Example 1. Assume $S$ has a Bern(1/2) distribution and $X$ is the result of $S$ passing through a BSC($p$) channel (assume $p < 1/2$). Assume the distortion measure is the Hamming distortion, i.e., $P[X \ne Y] \le D$. Note that using the kernel $P_{Y|X}$ given by $Y = X \oplus Z$, where $Z$ has a Bern($D$) distribution, we achieve $I(S;Y) = 1 - h(p * D)$, where $p * D = p(1-D) + (1-p)D$ and $h(\cdot)$ denotes the entropy of a Bernoulli random variable. Next, we show that the minimum objective of Eq. (2) is $1 - h(p * D)$. We have $I(S;Y) = H(S) - H(S|Y) = 1 - H(S \oplus Y \mid Y) \ge 1 - H(S \oplus Y)$. Using the Markov property, it is straightforward to obtain $P[S \oplus Y = 1] \le p(1-D) + (1-p)D$. Therefore, the minimum objective of Eq. (2) is $1 - h(p * D)$. Assume we want to have full privacy. Full privacy is not possible except in two cases: 1) $p = 1/2$, implying $X$ is independent of $S$; in this case, there is no privacy problem to begin with. 2) $D = 1/2$, implying $Y$ is independent of $X$; in this case, full privacy implies no utility may be provided to a user for services received based on the released data.
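A quick numerical sanity check of Example 1 (again my own illustration, reusing the hypothetical privacy_mapping() sketch above): the solver's minimum leakage should match the closed form $1 - h(p*D)$.

```python
# Example 1 check: S ~ Bern(1/2), X = S through BSC(p), Hamming distortion, budget D.
import numpy as np

def h(q):  # binary entropy in bits
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

p, D = 0.1, 0.2
P_SX = 0.5 * np.array([[1 - p, p],           # P_{S,X}(s, x) = P(S=s) P(X=x | S=s)
                       [p, 1 - p]])
dist = 1.0 - np.eye(2)                       # Hamming distortion d(x, y) = 1{x != y}
_, leakage_nats = privacy_mapping(P_SX, dist, D)
print(leakage_nats / np.log(2))              # solver's I(S;Y), converted from nats to bits
print(1 - h(p * (1 - D) + (1 - p) * D))      # closed form 1 - h(p*D) from Example 1
```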

One natural and related question is whether a privacy preserving mapping which is designed to minimize information leakage by solving the optimization problem as shown in Eq. (2), also provides guarantees on the probability of correctly inferring S from the observation of Y, using any inference algorithm. Next, we show a lower bound on the error probability in inferring S from Y, based on the information leakage, using any inference algorithm.

Proposition 1. Assume the cardinality of $\mathcal{S}$ satisfies $|\mathcal{S}| > 2$, and $I(S;Y) \le \epsilon H(S)$. Let $\hat{S}$ be an estimator of $S$ based on the observation $Y$ (possibly randomized). We have

$$P_e = \mathbb{P}[\hat{S}(Y) \ne S] \ge \frac{(1-\epsilon)H(S) - 1}{\log(|\mathcal{S}| - 1)}. \qquad (3)$$

For $|\mathcal{S}| = 2$, we have $h(P_e) \ge (1-\epsilon)H(S)$.

Proof: From Fano's inequality, we have $P_e \log(|\mathcal{S}| - 1) \ge H(S|Y) - h(P_e)$. Since $I(Y;S) = H(S) - H(S|Y) \le \epsilon H(S)$, we have $H(S|Y) \ge (1-\epsilon)H(S)$. Therefore,

$$P_e \ge \frac{(1-\epsilon)H(S) - h(P_e)}{\log(|\mathcal{S}| - 1)} \ge \frac{(1-\epsilon)H(S) - 1}{\log(|\mathcal{S}| - 1)}.$$

The proof when $|\mathcal{S}| = 2$ is similar. □

Thus, no matter which inference algorithm the analyst uses to infer $S$ from the observation $Y$, the inference algorithm will incorrectly infer the private data, i.e., $\hat{S}(Y) \ne S$, with probability at least $\frac{(1-\epsilon)H(S) - 1}{\log(|\mathcal{S}|-1)}$. In other words, the success probability of any inference algorithm in correctly inferring the private data $S$ is at most $1 - \frac{(1-\epsilon)H(S) - 1}{\log(|\mathcal{S}|-1)}$, which is bounded away from 1. The smaller $\epsilon$, the higher the probability that the inference algorithm will be incorrect in the inference of the private data. In the extreme case where $\epsilon = 0$, perfect privacy is achieved, and no inference algorithm can perform better than an uninformed random guess.
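As a concrete numerical illustration of the bound in Proposition 1 (the numbers are chosen purely for illustration): for a private attribute uniform over four values, so that $H(S) = 2$ bits, and a mapping with leakage factor $\epsilon = 0.2$,

$$P_e \;\ge\; \frac{(1-0.2)\cdot 2 - 1}{\log_2(4-1)} \;=\; \frac{0.6}{1.585} \;\approx\; 0.38,$$

so any inference algorithm misidentifies $S$ at least roughly 38% of the time.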

Joint distribution $P_{S,X}$ is unknown

In practice, we may not have access to the joint probability distribution $P_{S,X}$. Therefore, finding the exact optimal solution of the optimization problem (2) may not be possible. In particular, we may only know the probability measure $P_X$, and not $P_{S,X}$. In this case, the privacy preserving mapping is the kernel $P_{Y|X}$ minimizing the following optimization problem:

$$\min_{P_{Y|X}} \max_{P_{S|X}} I(S;Y) \quad \text{s.t. } \mathbb{E}[d(X,Y)] \le D, \; P_{Y|X} \text{ is a valid conditional distribution.} \qquad (4)$$

In the following, we propose a scheme to achieve privacy (i.e., to minimize the information leakage) subject to the distortion constraint, based on techniques from statistical inference known as maximal correlation. We show how this theory can be used to design privacy preserving mappings without full knowledge of the joint probability measure $P_{S,X}$. In particular, we prove a separability result on the information leakage: more precisely, we provide an upper bound on the information leakage in terms of $I(S;X)$ times a maximal correlation factor, which is determined by the kernel $P_{Y|X}$. This permits formulating the optimum mapping without full knowledge of the joint probability measure $P_{S,X}$.

Next, we provide a definition that is used in stating a decoupling result.

Definition 3. For a given joint distribution $P_{X,Y}$, let
$$S^*(X;Y) = \sup_{r(x) \ne p(x)} \frac{D(r(y) \,\|\, p(y))}{D(r(x) \,\|\, p(x))},$$
where $r(y)$ is the marginal measure on $Y$ of $p(y|x)\, r(x)$. Note that $S^*(X;Y) \le 1$ because of the data processing inequality for divergence. The following is a result of an article by V. Anantharam, A. Gohari, S. Kamath, and C. Nair, "On maximal correlation, hypercontractivity, and the data processing inequality studied by Erkip and Cover," arXiv preprint arXiv:1304.6133, 2013 (hereinafter "Anantharam").

Theorem 1. If $S \to X \to Y$ form a Markov chain, the following bound holds:

$$I(S;Y) \le S^*(X;Y)\, I(S;X), \qquad (6)$$

and the bound is tight as we vary $S$. In other words, we have

$$\sup_{S:\, S \to X \to Y} \frac{I(S;Y)}{I(S;X)} = S^*(X;Y), \qquad (7)$$

assuming $I(S;X) \ne 0$.

Theorem 1 decouples the dependency of $Y$ and $S$ into two terms, one relating $S$ and $X$, and one relating $X$ and $Y$. Thus, one can upper bound the information leakage, even without knowing $P_{S,X}$, by minimizing the term relating $X$ and $Y$. The application of this result to our problem is described in the following.

Assume we are in a regime where $P_{S,X}$ is not known and $I(S;X) \le \Delta$ for some $\Delta \in [0, H(S)]$. $I(S;X)$ is the intrinsic information embedded in $X$ about $S$, over which we have no control. The value of $\Delta$ does not affect the mapping we will find, but it affects what we believe the privacy guarantee (in terms of the leakage factor) resulting from this mapping to be. If the bound $\Delta$ is tight, then the privacy guarantee is tight. If the bound $\Delta$ is not tight, we may be paying more distortion than is actually necessary for a target leakage factor, but this does not affect the privacy guarantee.

Using Theorem 1, we have

$$\min_{P_{Y|X}} \max_{P_{S,X}} I(S;Y) = \min_{P_{Y|X}} \max_{P_X} \max_{P_{S|X}} I(S;Y) \le \Delta \left( \min_{P_{Y|X}} \max_{P_X} S^*(X;Y) \right).$$

Therefore, the optimization problem becomes finding $P_{Y|X}$ that minimizes the following objective function:

$$\min_{P_{Y|X}} \max_{P_X} S^*(X;Y) \quad \text{s.t. } \mathbb{E}[d(X,Y)] \le D. \qquad (8)$$

In order to study this optimization problem in more detail, we review some results from the maximal correlation literature. Maximal correlation (or Rényi correlation) is a measure of correlation between two random variables with applications both in information theory and computer science. In the following, we define maximal correlation and provide its relation to $S^*(X;Y)$.

Definition 4. Given two random variables $X$ and $Y$, the maximal correlation of $(X,Y)$ is

$$\rho_m(X;Y) = \sup_{(f,g) \in \mathcal{T}} \mathbb{E}[f(X)\, g(Y)], \qquad (9)$$

where $\mathcal{T}$ is the collection of pairs of real-valued random variables $f(X)$ and $g(Y)$ such that $\mathbb{E}[f(X)] = \mathbb{E}[g(Y)] = 0$ and $\mathbb{E}[f(X)^2] = \mathbb{E}[g(Y)^2] = 1$.

This measure was first introduced by Hirschfeld (H. O. Hirschfeld, "A connection between correlation and contingency," in Proceedings of the Cambridge Philosophical Society, vol. 31) and Gebelein (H. Gebelein, "Das statistische Problem der Korrelation als Variations- und Eigenwert-problem und sein Zusammenhang mit der Ausgleichungsrechnung," Zeitschrift für angew. Math. und Mech., vol. 21, pp. 364-379, 1941), and then studied by Rényi (A. Rényi, "On measures of dependence," Acta Mathematica Hungarica, vol. 10, no. 3). Recently, Anantharam et al. and Kamath et al. (S. Kamath and V. Anantharam, "Non-interactive simulation of joint distributions: The Hirschfeld-Gebelein-Rényi maximal correlation and the hypercontractivity ribbon," in Communication, Control, and Computing (Allerton), 2012 50th Annual Allerton Conference on, hereinafter "Kamath") studied the maximal correlation and provided a geometric interpretation of this quantity. The following is a result of an article by R. Ahlswede and P. Gacs, "Spreading of sets in product spaces and hypercontraction of the Markov operator," The Annals of Probability (hereinafter "Ahlswede"):

$$\max_{P_X} \rho_m^2(X;Y) = \max_{P_X} S^*(X;Y). \qquad (10)$$

Substituting (10) in (8), the privacy preserving mapping is the solution of

$$\min_{P_{Y|X}} \max_{P_X} \rho_m^2(X;Y) \quad \text{s.t. } \mathbb{E}[d(X,Y)] \le D. \qquad (11)$$

It is shown in an article by H. S. Witsenhausen, "On sequences of pairs of dependent random variables," SIAM Journal on Applied Mathematics, vol. 28, no. 1, that the maximal correlation $\rho_m(X;Y)$ is characterized by the second largest singular value of the matrix $Q$ with entries $Q_{x,y} = \frac{P(x,y)}{\sqrt{P_X(x)\, P_Y(y)}}$. The optimization problem can be solved by the power iteration algorithm or the Lanczos algorithm for finding singular values of a matrix. The two quantities $S^*(X;Y)$ and $\rho_m(X;Y)$ are closely related. Two sufficient conditions under which $S^*(X;Y) = \rho_m^2(X;Y)$ are given in Theorem 7 of Ahlswede. Next, we provide an example of such a case.

Example 2. Let $X \sim \text{Bern}(1/2)$ and $Y = X + N \pmod 2$, where $N \sim \text{Bern}(D)$ and $X$ is independent of $N$ ($X \perp N$). It is shown in Kamath that $S^*(X;Y) = \rho_m^2(X;Y) = (1-2D)^2$. Using this bound where $S \sim \text{Bern}(1/2)$, $X = S + \text{Bern}(p)$, and $Y = X + \text{Bern}(D)$, we obtain $I(S;Y) \le (1-2D)^2 (1 - h(p))$. Compare this to what we showed in Example 1: $I(S;Y) = 1 - h(p*D)$. Here, $(1-2D)^2$ is the injected privacy term obtained by the kernel $P_{Y|X}$, and $1 - h(p)$ is the intrinsic information/privacy term quantifying the relation between $X$ and $S$.
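The following small numerical check (my own illustration, with arbitrary values of p and D) verifies both Witsenhausen's singular-value characterization for this binary symmetric example and the relation between Example 2's upper bound and Example 1's exact leakage.

```python
import numpy as np

def h(q):  # binary entropy in bits
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

p, D = 0.1, 0.2
# Witsenhausen: rho_m(X;Y) is the 2nd largest singular value of Q_{x,y} = P(x,y)/sqrt(P_X(x) P_Y(y)).
P_XY = 0.5 * np.array([[1 - D, D], [D, 1 - D]])            # X ~ Bern(1/2), Y = X xor Bern(D)
Q = P_XY / np.sqrt(np.outer(P_XY.sum(1), P_XY.sum(0)))
print(np.linalg.svd(Q, compute_uv=False))                  # [1.0, 1 - 2D], so rho_m = 1 - 2D
# Example 2's upper bound versus the exact leakage from Example 1:
print((1 - 2 * D) ** 2 * (1 - h(p)))                       # (1-2D)^2 (1 - h(p)) ...
print(1 - h(p * (1 - D) + (1 - p) * D))                    # ... is at least 1 - h(p*D), the exact I(S;Y)
```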

Marginal distribution $P_X$ is known, but not the joint distribution $P_{S,X}$

Next, we consider the case where only the marginal distribution $P_X$ is known, but not the joint distribution $P_{S,X}$. We wish to design $P_{Y|X}$. Assume that $|\mathcal{X}| = |\mathcal{Y}| = n$. The optimization problem in Eq. (8) becomes

$$\min_{P_{Y|X}} S^*(X;Y) \quad \text{s.t. } \mathbb{E}[d(X,Y)] \le D. \qquad (12)$$

Now, consider the following optimization problem obtained by replacing $S^*(X;Y)$ with $\rho_m^2(X;Y)$:

$$\min_{P_{Y|X}} \rho_m^2(X;Y) \quad \text{s.t. } \mathbb{E}[d(X,Y)] \le D. \qquad (13)$$

We solve this optimization problem, and if the final solution satisfies $S^*(X;Y) = \rho_m^2(X;Y)$, then we have the solution to (12). In particular, if one of the conditions given in Ahlswede holds, then we have the solution to (12). Next, we reformulate the constraint set in (13).

Theorem 2. Given a distribution $P_X$, let $\sqrt{P_X}$ denote the vector with entries equal to the square roots of the entries of $P_X$. If $Q$ is an $n \times n$ matrix satisfying the following constraints: 1) $Q \ge 0$ (entry-wise), 2) $\|Q^T \sqrt{P_X}\|_2 = 1$, and 3) $Q Q^T \sqrt{P_X} = \sqrt{P_X}$, then $P_{Y|X}$ (and $P_{X,Y}$) can be found uniquely such that $Q_{x,y} = \frac{P(x,y)}{\sqrt{P_X(x)\, P_Y(y)}}$.

Proof: Since $Q \ge 0$ and $\sqrt{P_X} > 0$, we have $Q^T \sqrt{P_X} \ge 0$. On the other hand, since $\|Q^T \sqrt{P_X}\|_2 = 1$, the vector $Q^T \sqrt{P_X}$ is the entry-wise square root of a probability distribution, denoted by $\sqrt{P_Y}$. Let $P_{X,Y}(i,j) = Q_{i,j} \sqrt{P_X(i)\, P_Y(j)}$. We claim that this $P_{X,Y}$ is a joint probability distribution consistent with $P_X$ and $P_Y$. Using the assumptions, we have

$$\sum_{i,j} P_{X,Y}(i,j) = \sqrt{P_X}^{\,T} Q \sqrt{P_Y} = \sqrt{P_X}^{\,T} Q\, Q^T \sqrt{P_X} = \sqrt{P_X}^{\,T} \sqrt{P_X} = 1.$$

Therefore, the defined $P_{X,Y}$ is a probability measure (using assumption 1, the entries are non-negative). Next, we show that $P_{X,Y}$ is consistent with $P_Y$. We have

$$\sum_i P_{X,Y}(i,j) = \sqrt{P_Y(j)} \Big( \sum_i Q_{i,j} \sqrt{P_X(i)} \Big) = \sqrt{P_Y(j)}\, \sqrt{P_Y(j)} = P_Y(j).$$

Similarly, $P_{X,Y}$ is consistent with $P_X$. □

Theorem 2 shows that we can rewrite the optimization problem (13) as

$$\begin{aligned} \min_{Q} \;& \lambda_2(Q) \\ \text{s.t. } & Q Q^T \sqrt{P_X} = \sqrt{P_X}, \quad \|Q^T \sqrt{P_X}\|_2 = 1, \\ & \mathbb{E}[d(X,Y)] \le D, \quad Q \ge 0 \text{ (entry-wise)}, \end{aligned} \qquad (14)$$

where $\lambda_2(Q)$ denotes the second largest singular value of $Q$ and the expectation is over the joint probability induced by the matrix $Q$. Note that the constraints are quadratic in the entries of $Q$. As an example of a distortion constraint, $\mathbb{P}[X = Y] = \mathrm{tr}\big(\mathcal{D}(\sqrt{P_X})\, Q\, \mathcal{D}(Q^T \sqrt{P_X})\big) \ge 1 - D$ is quadratic in $Q$, where $\mathcal{D}(v)$ is a diagonal matrix with the entries of $v$ on the diagonal. Once we find $Q$, we then obtain $P_{Y|X}$. Again, this optimization can be solved by the power iteration algorithm or the Lanczos algorithm.
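To make the last remark concrete, the following is a minimal sketch (my own, under the assumption that $Q$ satisfies the constraints of Theorem 2) of estimating $\lambda_2(Q)$ by power iteration, after deflating the known top singular pair $(\sqrt{P_X},\, Q^T\sqrt{P_X})$, whose singular value is 1.

```python
import numpy as np

def second_singular_value(Q, P_X, iters=1000):
    """Estimate lambda_2(Q) = rho_m(X;Y), assuming Q meets Theorem 2's constraints."""
    u1 = np.sqrt(P_X)                    # left singular vector of the top pair
    v1 = Q.T @ u1                        # right singular vector, i.e. sqrt(P_Y)
    Q_defl = Q - np.outer(u1, v1)        # deflate the top singular component (singular value 1)
    v = np.random.default_rng(0).normal(size=Q.shape[1])
    for _ in range(iters):               # power iteration on Q_defl^T Q_defl
        v = Q_defl.T @ (Q_defl @ v)
        v /= np.linalg.norm(v)
    return np.linalg.norm(Q_defl @ v)    # lambda_2(Q)

# For instance, the Q matrix of the binary check above has singular values {1, 1 - 2D}.
```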

FIG. 1 illustrates an exemplary method 100 for distorting public data to be released in order to preserve privacy according to the present principles. Method 100 starts at 105. At step 110, it collects statistical information based on released data, for example, from the users who are not concerned about privacy of their public data or private data. We denote these users as "public users," and denote the users who wish to distort public data to be released as "private users."

The statistics may be collected by crawling the web, accessing different databases, or may be provided by a data aggregator, for example, by bluekai.com. Which statistical information can be gathered depends on what the public users release. For example, if the public users release both private data and public data, an estimate of the joint distribution $P_{S,X}$ can be obtained. In another example, if the public users only release public data, an estimate of the marginal probability measure $P_X$ can be obtained, but not the joint distribution $P_{S,X}$. In another example, we may only be able to get the mean and variance of the public data. In the worst case, we may be unable to get any information about the public data or private data.

At step 120, it determines a privacy preserving mapping based on the statistical information, given the utility constraint. As discussed before, the solution to the privacy preserving mapping mechanism depends on the available statistical information. For example, if the joint distribution $P_{S,X}$ is known, the privacy preserving mapping may be obtained using Eq. (2); if the marginal distribution $P_X$ is known, but not the joint distribution $P_{S,X}$, the privacy preserving mapping may be obtained using Eq. (4); if neither the marginal distribution $P_X$ nor the joint distribution $P_{S,X}$ is known, the privacy preserving mapping $P_{Y|X}$ may be obtained using Eq. (8). At step 130, the public data of a current private user is distorted, according to the determined privacy preserving mapping, before it is released to, for example, a service provider or a data collecting agency, at step 140. Given the value $X = x$ for the private user, a value $Y = y$ is sampled according to the distribution $P_{Y|X=x}$. This value $y$ is released instead of the true $x$. Note that the use of the privacy mapping to generate the released $y$ does not require knowing the value of the private data $S = s$ of the private user. Method 100 ends at step 199.
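A minimal sketch of this sampling step (my own illustration; P_YgX stands for the solved mapping, e.g., the matrix returned by the hypothetical privacy_mapping() sketch earlier):

```python
# Steps 130-140 in miniature: given the solved mapping P_{Y|X} and the user's true
# public value x, sample the released value y ~ P_{Y|X=x}.
import numpy as np

def release(P_YgX, x, rng=np.random.default_rng()):
    row = np.clip(P_YgX[x], 0.0, None)        # guard against tiny negative solver noise
    row = row / row.sum()
    return rng.choice(len(row), p=row)        # released value y, sent instead of x

# Note: sampling y never requires the user's private value s, only the public value x.
```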

FIGs. 2-4 illustrate in further detail exemplary methods for preserving privacy when different statistical information is available. Specifically, FIG. 2 illustrates an exemplary method 200 when the joint distribution $P_{S,X}$ is known, FIG. 3 illustrates an exemplary method 300 when neither the marginal probability measure $P_X$ nor the joint distribution $P_{S,X}$ is known, and FIG. 4 illustrates an exemplary method 400 when the marginal probability measure $P_X$ is known, but not the joint distribution $P_{S,X}$. Methods 200, 300 and 400 are discussed in further detail below.

Method 200 starts at 205. At step 210, it estimates the joint distribution $P_{S,X}$ based on released data. At step 220, it formulates the optimization problem as Eq. (2). At step 230, it determines a privacy preserving mapping based on Eq. (2), for example, solving Eq. (2) as a convex problem. At step 240, the public data of a current user is distorted, according to the determined privacy preserving mapping, before it is released at step 250. Method 200 ends at step 299.

Method 300 starts at 305. At step 310, it formulates the optimization problem as Eq. (8) via maximal correlation. At step 320, it determines a privacy preserving mapping based on Eq. (8), for example, solving Eq. (8) using the power iteration or Lanczos algorithm. At step 330, the public data of a current user is distorted, according to the determined privacy preserving mapping, before it is released at step 340. Method 300 ends at step 399.

Method 400 starts at 405. At step 410, it estimates the distribution $P_X$ based on released data. At step 420, it formulates the optimization problem as Eq. (4) via maximal correlation. At step 430, it determines a privacy preserving mapping based on Eq. (12), for example, by solving the related Eq. (14) using the power iteration or Lanczos algorithm. At step 440, the public data of a current user is distorted, according to the determined privacy preserving mapping, before it is released at step 450. Method 400 ends at step 499.

A privacy agent is an entity that provides privacy service to a user. A privacy agent may perform any of the following:

- receive from the user what data he deems private, what data he deems public, and what level of privacy he wants;

- compute the privacy preserving mapping;

- implement the privacy preserving mapping for the user (i.e., distort his data according to the mapping); and

- release the distorted data, for example, to a service provider or a data collecting agency.

The present principles can be used in a privacy agent that protects the privacy of user data. FIG. 5 depicts a block diagram of an exemplary system 500 where a privacy agent can be used. Public users 510 release their private data (S) and/or public data (X). As discussed before, public users may release public data as is, that is, Y = X. The information released by the public users becomes statistical information useful for a privacy agent.

A privacy agent 580 includes statistics collecting module 520, privacy preserving mapping decision module 530, and privacy preserving module 540. Statistics collecting module 520 may be used to collect the joint distribution $P_{S,X}$, the marginal probability measure $P_X$, and/or the mean and covariance of the public data.

Statistics collecting module 520 may also receive statistics from data aggregators, such as bluekai.com. Depending on the available statistical information, privacy preserving mapping decision module 530 designs a privacy preserving mapping mechanism $P_{Y|X}$, for example, based on the optimization problem formulated as Eq. (2), (8), or (12). Privacy preserving module 540 distorts the public data of private user 560 before it is released, according to the conditional probability $P_{Y|X}$. In one embodiment, statistics collecting module 520, privacy preserving mapping decision module 530, and privacy preserving module 540 can be used to perform steps 110, 120, and 130 in method 100, respectively.

Note that the privacy agent only needs the statistics to work, without knowledge of the entire data that was collected in the data collection module. Thus, in another embodiment, the data collection module could be a standalone module that collects data and then computes statistics, and need not be part of the privacy agent. The data collection module shares the statistics with the privacy agent.

A privacy agent sits between a user and a receiver of the user data (for example, a service provider). For example, a privacy agent may be located at a user device, for example, a computer, or a set-top box (STB). In another example, a privacy agent may be a separate entity.

All the modules of a privacy agent may be located in one device, or may be distributed over different devices. For example, statistics collecting module 520 may be located at a data aggregator who only releases statistics to module 530; the privacy preserving mapping decision module 530 may be located at a "privacy service provider," or at the user end on the user device connected to a module 520; and the privacy preserving module 540 may be located at a privacy service provider, who then acts as an intermediary between the user and the service provider to whom the user would like to release data, or at the user end on the user device.

The privacy agent may provide released data to a service provider, for example, Comcast or Netflix, in order for private user 560 to improve the received service based on the released data; for example, a recommendation system provides movie recommendations to a user based on his released movie rankings. In FIG. 6, we show that there are multiple privacy agents in the system. In different variations, there need not be privacy agents everywhere, as this is not a requirement for the privacy system to work. For example, there could be only a privacy agent at the user device, or at the service provider, or at both. In FIG. 6, we show the same privacy agent "C" used for both Netflix and Facebook. In another embodiment, the privacy agents at Facebook and Netflix can, but need not, be the same.

In the following, we compare and show the relationship between different existing privacy metrics, in particular divergence privacy, differential privacy, and information privacy. We provide examples of the differences in the privacy-accuracy tradeoffs achieved under these different notions. We show that using divergence privacy, the present principles advantageously guarantee a small probability of inferring private data based on the released data (Proposition 1).

Definition 5.

• Differential privacy: For a given $\epsilon$, $P_{Y|S}$ is $\epsilon$-differentially private if

$$\sup_{A,\, s, s':\, s \sim s'} \frac{P(y \in A \mid s)}{P(y \in A \mid s')} \le e^{\epsilon}, \qquad (15)$$

for any measurable set $A$, where $s \sim s'$ denotes that $s$ and $s'$ are neighbors. The notion of neighboring can have multiple definitions, e.g., Hamming distance 1 (differing in a single coordinate), or $\ell_p$ distance below a threshold. In the present application, we use the former definition.

• Strong differential privacy: For a given $\epsilon$, $P_{Y|S}$ is $\epsilon$-strongly differentially private if

$$\sup_{A,\, s, s'} \frac{P(y \in A \mid s)}{P(y \in A \mid s')} \le e^{\epsilon}, \qquad (16)$$

for any measurable set $A$ and any $s$ and $s'$. This definition is related to local differential privacy. It is stronger than differential privacy, because the neighboring restriction is relaxed: the condition must hold for all pairs $s, s'$.

• Information privacy: For a given $\epsilon$, $P_{Y|S}$ is $\epsilon$-information private if

$$e^{-\epsilon} \le \frac{P(s \in B \mid y \in A)}{P(s \in B)} \le e^{\epsilon}, \qquad (17)$$

for any measurable sets $A$ and $B$.

• Worst-case divergence privacy: For a given $\epsilon$, $P_{Y|S}$ is worst-case $\epsilon$-divergence private if

$$H(S) - \min_{y} H(S \mid Y = y) = \epsilon H(S). \qquad (18)$$

• $(\epsilon, \delta)$-differential privacy: For any given $\epsilon$ and $\delta$, $P_{Y|S}$ is $(\epsilon, \delta)$-differentially private if

$$P(y \in A \mid s) \le P(y \in A \mid s')\, e^{\epsilon} + \delta, \qquad (19)$$

for any measurable set $A$ and neighboring $s$ and $s'$.

Next, we compare the definitions given above.

Proposition 2. We have the following relations between the privacy metrics, where "$\Rightarrow$" means "implies," that is, the right side follows from the left side.

$\epsilon$-strong differential privacy $\Rightarrow$ $\epsilon$-information privacy

$\epsilon$-information privacy $\Rightarrow$ $2\epsilon$-strong differential privacy

$\epsilon$-information privacy $\Rightarrow$ $\frac{2\epsilon}{H(S)}$-worst-case divergence privacy

$\frac{2\epsilon}{H(S)}$-worst-case divergence privacy $\Rightarrow$ $\frac{2\epsilon}{H(S)}$-divergence privacy

$\epsilon$-differential privacy $\Rightarrow$ $(\epsilon, \delta)$-differential privacy for any $\delta \ge 0$.
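For finite alphabets, the strong differential privacy and information privacy factors of Definition 5 can be computed directly from the mechanism $P_{Y|S}$ and the prior $P_S$, since the suprema over measurable sets reduce to maxima over individual outcomes. The sketch below is my own illustration (the function name and the small numerical smoothing are mine); on such examples the computed factors are consistent with the first two implications of Proposition 2.

```python
import numpy as np

def privacy_factors(P_YgS, P_S, tiny=1e-15):
    """P_YgS[s, y] = P(y | s); returns (epsilon_strong_dp, epsilon_info)."""
    # sup_{y, s, s'} log P(y|s)/P(y|s'): the strong differential privacy factor.
    ratios = (P_YgS[:, None, :] + tiny) / (P_YgS[None, :, :] + tiny)
    eps_strong_dp = np.log(ratios.max())
    # max_{s, y} |log P(s|y)/P(s)|: the information privacy factor.
    P_SY = P_S[:, None] * P_YgS                  # joint P(s, y)
    P_Y = P_SY.sum(axis=0)
    posterior_ratio = P_SY / (P_Y[None, :] * P_S[:, None] + tiny)
    eps_info = np.abs(np.log(posterior_ratio + tiny)).max()
    return eps_strong_dp, eps_info
```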

Proposition 2 is summarized in FIG. 7. In the following, we give two examples comparing differential privacy with divergence privacy. In the first example, we focus on the probability of recovering the private data given that we satisfy these notions of privacy.

Considering the particular case of a counting query, we show that, using differential privacy, full detection of the private data is possible. On the other hand, using divergence privacy, the probability of detecting the private data is small.

Example 3. Let $S_1, \ldots, S_n$ be binary correlated random variables and let $X = \sum_{i=1}^{n} S_i$. Assume $S_1, \ldots, S_n$ are correlated in a way that $S_1 \ge S_2 \ge \cdots \ge S_n$. Therefore, knowing $X$, we can exactly recover $S = (S_1, \ldots, S_n)$. Also, assume the $S_i$'s ($1 \le i \le n$) are correlated in a way that $P(X = ki) = \frac{1}{1 + n/k}$ for $i \in \{0, 1, \ldots, n/k\}$ (assume $n \equiv 0 \pmod{k}$). $P(Y|S)$ is $\epsilon$-differentially private if we add Laplacian noise to $X$, i.e., $Y = X + \text{Lap}(\frac{1}{\epsilon})$. Fix $\epsilon$ and let $n = k^k$, where $k$ goes to infinity. It is shown that the error probability in detecting $X$ (and $S$) is approximately $P_e = e^{-\epsilon k/2}$, which is very small for large enough $k$. Thus, differential privacy does not guarantee a small probability of detecting $S$. Note that the divergence privacy factor $\frac{I(S;Y)}{H(S)}$ is approximately $1 - e^{-\epsilon k/2}$, which is very close to one, and this is the reason for the large detection probability. $P(Y|S)$ is $\epsilon$-divergence private if we add Gaussian noise instead of Laplacian noise, with a variance chosen appropriately as follows. The variance of the Gaussian noise depends on the correlation in the data $S$ via the variance of $X$, $\sigma_X^2$. We have $\sigma_X^2 \approx \frac{k^{2k}}{12}$, where $\approx$ denotes that the ratio goes to 1 as $k$ goes to infinity. Let $N$ be a Gaussian distribution with a variance satisfying $\sigma_N^2 \approx \sigma_X^2 / k^{2\epsilon(k-1)}$. Adding this noise to $X$, the leakage factor is less than or equal to $\epsilon$. Moreover, $P_e \ge \frac{(1-\epsilon)H(S) - 1}{\log(n/k)} \to 1 - \epsilon$.

That is, the probability of detecting the private data is very small using divergence privacy.
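The following Monte Carlo sketch illustrates the qualitative contrast in Example 3 (the parameter values, the nearest-grid-point decoder, and the Gaussian variance rule are my own choices, not the patent's): Laplacian noise calibrated for $\epsilon$-differential privacy of the counting query leaves the private data easy to detect, while Gaussian noise whose variance is scaled to the variance of $X$ so that the capacity bound $\frac{1}{2}\log_2(1 + \sigma_X^2/\sigma_N^2) \le \epsilon H(X)$ holds (a sufficient condition for an $\epsilon$-divergence-private release here, since $X$ determines $S$) makes detection close to impossible.

```python
import numpy as np

rng = np.random.default_rng(1)
eps, k, trials = 0.2, 50, 20000
support = np.arange(0, k**4 + 1, k)              # X uniform on {0, k, 2k, ..., k^4}
H_X_bits = np.log2(len(support))                 # H(S) = H(X), since X determines S
var_X = support.var()
X = rng.choice(support, size=trials)

# (a) Laplace mechanism: eps-differentially private for the counting query (sensitivity 1).
Y_lap = X + rng.laplace(scale=1.0 / eps, size=trials)
# (b) Gaussian noise with 0.5*log2(1 + var_X/var_N) = eps*H(X), which upper-bounds I(X;Y)
#     and hence keeps the divergence-privacy leakage factor at most eps.
var_N = var_X / (2 ** (2 * eps * H_X_bits) - 1)
Y_gauss = X + rng.normal(scale=np.sqrt(var_N), size=trials)

def detect(y):                                   # nearest-grid-point decoding of X (hence of S)
    return np.clip(np.round(y / k) * k, 0, support[-1])

print("Laplace  detection error:", np.mean(detect(Y_lap) != X))    # small: DP does not hide S
print("Gaussian detection error:", np.mean(detect(Y_gauss) != X))  # large, consistent with Proposition 1
```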

The implementations described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed may also be implemented in other forms (for example, an apparatus or program). An apparatus may be implemented in, for example, appropriate hardware, software, and firmware. The methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs"), and other devices that facilitate communication of information between end-users.

Reference to "one embodiment" or "an embodiment" or "one implementation" or "an implementation" of the present principles, as well as other variations thereof, mean that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present principles. Thus, the appearances of the phrase "in one embodiment" or "in an embodiment" or "in one implementation" or "in an implementation", as well any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment.

Additionally, this application or its claims may refer to "determining" various pieces of information. Determining the information may include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory. Further, this application or its claims may refer to "accessing" various pieces of information. Accessing the information may include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.

Additionally, this application or its claims may refer to "receiving" various pieces of information. Receiving is, as with "accessing", intended to be a broad term. Receiving the information may include one or more of, for example, accessing the information, or retrieving the information (for example, from memory). Further, "receiving" is typically involved, in one way or another, during operations such as, for example, storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.

As will be evident to one of skill in the art, implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted. The information may include, for example, instructions for performing a method, or data produced by one of the described implementations. For example, a signal may be formatted to carry the bitstream of a described embodiment. Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal. The formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream. The information that the signal carries may be, for example, analog or digital information. The signal may be transmitted over a variety of different wired or wireless links, as is known. The signal may be stored on a processor-readable medium.