As with all my scanned notes, this has the usual disclaimer: these posts are just so I can use search engines to easily search for my notes, which are sufficient for me to recall talks and papers but probably not much use to anyone else. Paper here.
Differential privacy is a recent privacy guarantee tailored to the problem of statistical disclosure control: how to publicly release statistical information about a set of people without compromising the privacy of any individual.
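The standard building block for answering a single numeric query under differential privacy is the Laplace mechanism: add noise drawn from a Laplace distribution whose scale is the query's sensitivity divided by the privacy parameter ε. A minimal sketch (the function name and parameters here are illustrative, not from the paper):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Return true_value plus Laplace noise with scale sensitivity/epsilon.

    For a counting query ("how many users reported fever?"), adding or
    removing one person changes the answer by at most 1, so sensitivity=1.
    """
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Release a noisy count with privacy parameter epsilon = 0.5.
noisy_count = laplace_mechanism(true_value=1234, sensitivity=1.0, epsilon=0.5)
```

Smaller ε means stronger privacy and noisier answers; the expected absolute error of a single release is sensitivity/ε.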
Up to this point, research on differentially private data analysis has focused on the setting of a trusted curator holding a large, static data set in a permanently infrangible storage system. In this work we extend differential privacy to two new realms, illustrated with the following scenario. Consider a website for H1N1 self-assessment. Individuals interact with the site to learn whether the symptoms they are experiencing are indicative of the H1N1 flu. To this end, the user provides demographic and symptom data.
Continual Observation: How can we continually analyze aggregate user information to monitor regional health conditions, while preserving differential privacy? (Joint work with Naor, Pitassi, and Rothblum.)
Pan-Privacy: How can we ensure that our analysis algorithm is differentially private “inside and out,” protecting users even against legal action or other intrusion on the internal state of the website? (Joint work with Naor, Pitassi, Rothblum, and Yekhanin.)
We will give examples of algorithms achieving these goals separately and in conjunction.
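For continual observation, the naive approach of adding fresh Laplace noise to the running count at every time step either spends privacy budget linearly in T or accumulates error linearly in T. The well-known fix, due to Dwork, Naor, Pitassi, and Rothblum (and independently Chan, Shi, and Song), maintains noisy partial sums over dyadic intervals so each released count combines only O(log T) noisy terms. A sketch under those assumptions (class and variable names are mine, not the paper's):

```python
import math
import numpy as np

class BinaryMechanismCounter:
    """Sketch of the tree-based counter for DP under continual observation.

    Each of up to T time steps contributes a 0/1 value; every prefix count
    released is a sum of O(log T) noisy dyadic partial sums, so error grows
    polylogarithmically in T rather than linearly.
    """

    def __init__(self, T, epsilon, rng=None):
        self.levels = max(1, math.ceil(math.log2(T + 1)))
        # Each input lands in one partial sum per level, so splitting
        # epsilon across levels gives total privacy cost epsilon.
        self.scale = self.levels / epsilon
        self.rng = rng or np.random.default_rng()
        self.t = 0
        self.alpha = [0.0] * (self.levels + 1)  # exact dyadic partial sums
        self.noisy = [0.0] * (self.levels + 1)  # their noisy releases

    def step(self, bit):
        """Consume one 0/1 value and return a noisy count of all values so far."""
        self.t += 1
        i = (self.t & -self.t).bit_length() - 1  # index of lowest set bit of t
        # The dyadic interval at level i ending at t absorbs the finished
        # lower-level intervals plus the new value.
        self.alpha[i] = sum(self.alpha[:i]) + bit
        for j in range(i):
            self.alpha[j] = 0.0
        self.noisy[i] = self.alpha[i] + self.rng.laplace(scale=self.scale)
        # The prefix [1..t] decomposes into the dyadic intervals given by
        # the set bits of t; sum their noisy versions.
        return sum(self.noisy[j] for j in range(self.levels + 1)
                   if (self.t >> j) & 1)
```

Pan-privacy asks for more: even the internal state (`alpha` here, which holds exact partial sums) would have to be differentially private against an intruder who seizes the machine, which this sketch does not provide.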
Also, see this survey of results about differential privacy.