Every ten years the UK conducts a national census, in which every household is asked to fill in details about its demographics, habits, travel and income. The next one will be the 2011 UK census.

The Office for National Statistics (ONS) has a statutory duty to ensure that the data released from this census cannot be used to identify any individual, or to infer any unknown attribute about them. Techniques for doing so are called statistical disclosure control, and they have been the subject of intense study for at least the last 40 years. One could never guess this by reading the documents on confidentiality for the next UK census.

To make a long story short: the ONS has never considered modern, well-defined notions of privacy; it lacks a reliable evaluation framework to establish the degree of risk of different methods (let alone their utility); and it has proposed disclosure control measures that fall rather short of the state of the art.

Moving households around (a bit)

The consultation is not quite over yet, but the current favorite after two rounds of evaluation seems to be a technique called “Record Swapping”. How does it work? The technique takes the database of all responses to the census and outputs another database that is sufficiently different to avoid identification and inference. Record swapping first categorises all records by household size, sex, broad age, and hard-to-count variables. It then selects 2-20% of the records, and each of them is paired with a record from the same category. Finally, the geographical data of each pair of records (yes, right, only the geographical data) are swapped.
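To make the mechanics concrete, here is a minimal sketch of how such a swap could be implemented. The field names (household_size, sex, age_band, hard_to_count, area), the swap fraction and the data layout are my own illustration, not the ONS's actual procedure.

```python
import random
from collections import defaultdict

def record_swap(records, swap_fraction=0.05, rng=random.Random(0)):
    """Toy record swapping: pair selected records with another record in the
    same (household_size, sex, age_band, hard_to_count) category and swap
    only their geography ('area'). Field names are illustrative."""
    # Group record indices by the matching category.
    categories = defaultdict(list)
    for i, r in enumerate(records):
        key = (r["household_size"], r["sex"], r["age_band"], r["hard_to_count"])
        categories[key].append(i)

    swapped = [dict(r) for r in records]          # work on a copy
    n_swaps = int(len(records) * swap_fraction)
    candidates = rng.sample(range(len(records)), n_swaps)

    for i in candidates:
        key = (records[i]["household_size"], records[i]["sex"],
               records[i]["age_band"], records[i]["hard_to_count"])
        partners = [j for j in categories[key] if j != i]
        if not partners:
            continue                              # no similar record to swap with
        j = rng.choice(partners)
        # Swap only the geographical attribute; everything else stays put.
        swapped[i]["area"], swapped[j]["area"] = swapped[j]["area"], swapped[i]["area"]
    return swapped
```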

This swapping procedure disperses the population geographically a bit, so that it is not possible to know whether a single cell in a table really provides information about an individual in a region, or whether it is the product of a swap from a different region. The advantages are that the totals stay the same (swapping records around leaves the sums invariant), the swaps are with “similar” households, and the procedure is simple to implement.

This is in line with the census office's definition of privacy, namely that:

“The Registrars General concluded that the Code of Practice statement can be met in relation to census outputs if no statistics are produced that allow the identification of an individual (or information about an individual) with a high degree of confidence. The Registrars General consider that, as long as there has been systematic perturbation of the data, the guarantee in the Code of Practice would be met.”

Problems with “Record Swapping”

So far a whole process has been followed to evaluate a list of proposed disclosure control measures, present a methodology to evaluate them, shortlist some, and perform more in-depth research about their utility and privacy. There is a lot of repetition in these documents, a few ad-hoc indicators of quality and privacy, and no security analysis whatsoever of inference attacks on the proposed schemes. The subject of “disclosure by differencing” is left as a suggestion for future work in the latest interim report, while the only methods left on the list are Record Swapping and ABS, which has apparently not been tested at all yet.

Why is that a problem? Records include many other potentially identifying fields aside from location. Since the rest of each record stands as it is, and is aggregated into tables with a secret small cell adjustment technique, we cannot really be sure that there are no re-identification attacks. (Apparently the details of the technique cannot be divulged for confidentiality reasons, violating even the most basic principle of security engineering! See page 3.)

The utility measures used to assess how acceptable these disclosure control measures will be to data users (Shlomo et al.) are themselves very simplistic and do not offer tight bounds on the possible errors, but I will leave this matter for the statisticians to blog about.
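For concreteness, here is a toy version of the kind of average cell-level distance one might compute, together with the worst-case error such an average hides; this is my own illustration, not the actual measure of Shlomo et al.

```python
def average_absolute_distance(original, perturbed):
    """Average absolute difference between corresponding table cells.
    A small average can hide large errors in individual (small) cells."""
    assert len(original) == len(perturbed)
    return sum(abs(o - p) for o, p in zip(original, perturbed)) / len(original)

def worst_case_distance(original, perturbed):
    """Maximum cell-wise error: the quantity an average-based measure ignores."""
    return max(abs(o - p) for o, p in zip(original, perturbed))

# Example: the average looks harmless while one small cell is badly distorted.
orig = [120, 85, 3, 40]
pert = [121, 84, 9, 40]
print(average_absolute_distance(orig, pert))   # 2.0
print(worst_case_distance(orig, pert))         # 6
```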

To make the problem worse, this time the ONS is seriously thinking of allowing data users to submit their own queries to the database of statistics. The queries are not likely to be full SQL any time soon, but tables over 3 categories (called cubes) are likely to be allowed. This leaves the system wide open to quite a range of attacks from the literature on inference in statistical databases.
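To see why this is dangerous, here is a toy differencing (or tracker) attack against an interface that only answers aggregate queries; the data, attributes and queries are invented for illustration.

```python
# Toy differencing / tracker attack against an interface that only releases
# aggregates. The dataset and attributes below are invented.
people = [
    {"name": "A", "area": "N1", "age": 34, "income": 31000},
    {"name": "B", "area": "N1", "age": 34, "income": 45000},
    {"name": "C", "area": "N1", "age": 71, "income": 18000},
]

def total_income(predicate):
    """The 'statistical' interface: only sums are released, never records."""
    return sum(p["income"] for p in people if predicate(p))

# The attacker knows C is the only person in N1 aged over 65 (side information).
all_n1   = total_income(lambda p: p["area"] == "N1")
under_65 = total_income(lambda p: p["area"] == "N1" and p["age"] < 65)

print(all_n1 - under_65)   # 18000: C's income, recovered from two aggregates
```

Small cell adjustment and swapping are meant to blunt such query pairs, but without a published analysis of how they compose across many overlapping cubes it is hard to know how far they succeed.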

At this point there is absolutely no evidence that the disclosure control scheme is actually secure, which in security engineering means that it is probably not.

How did we get to this situation?

It seems the bulk of the work on disclosure control has been done by the ONS, in conjunction with researchers from the University of Southampton. None of the authors of any of the evaluations has substantial research experience in privacy technology or in the theoretical computer security that deals with these privacy matters in a systematic way.

What is revealing is the fact that the most relevant related work is never mentioned. It includes:

  • The work of Denning on trackers and inference in statistical databases (1980). Instead the archaic term “differencing” is used.
  • The work of Sweeney and Samarati on linkage attacks and k-anonymity (1997).
  • The work of Dwork on Differential Privacy (2007), which is the most current and strongest definition of privacy for statistical databases.

These works show repeatedly that ad-hoc inference control measures, which only aim to suppress a handful of known and obvious attacks, are systematically bypassed.

Dwork, in her work on Differential Privacy (which won the 2009 PET Award), provides clear arguments for why simpler ad-hoc techniques cannot provide the same guarantee of privacy: their results can be combined with side information known to the adversary to facilitate inference. Differential privacy, on the other hand, guarantees that the result of a query to the database, or a published table, reveals essentially no more information about any individual when composed with other such queries or with any side information.
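For readers who have not met it, the basic building block, the Laplace mechanism for counting queries, is easy to sketch: add noise calibrated to the sensitivity of the query and to a privacy parameter epsilon. This is a textbook illustration only, not a proposal for census outputs.

```python
import math
import random

def laplace_noise(scale, rng=random.Random()):
    """Sample Laplace(0, scale) noise via inverse transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon):
    """Release a count with the Laplace mechanism. A counting query has
    sensitivity 1 (one person changes the count by at most 1), so the
    noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: a noisy count of single-person households in a toy dataset.
households = [{"size": 1}, {"size": 3}, {"size": 1}, {"size": 2}]
print(dp_count(households, lambda h: h["size"] == 1, epsilon=0.5))
```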

This is a hot topic in research today, and all the details may not be ready for a census in 2 years' time. That does not justify the ONS's ignorance of this field.

Micah Sherr presented at PETS a few days ago his work on “Scalable Link-Based Relay Selection for Anonymous Routing”. The key idea is that paths are generated by taking into account the network performance of each link to be used. The overhead of distributing performance information can be reduced by associating with each server a network coordinate, which allows the latency between pairs of nodes to be estimated.
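The network coordinate idea can be illustrated very simply: each relay advertises a point in a low-dimensional space, and a client estimates the latency of any link as the distance between the endpoints' coordinates, without ever measuring that link directly. The coordinates and relay names below are invented; the paper's actual embedding is more sophisticated.

```python
import math

# Hypothetical 2-D network coordinates advertised by each relay (milliseconds).
coords = {
    "relay_a": (0.0, 0.0),
    "relay_b": (30.0, 40.0),
    "relay_c": (10.0, 5.0),
}

def estimated_latency(x, y):
    """Estimate link latency as the Euclidean distance between coordinates."""
    (x1, y1), (x2, y2) = coords[x], coords[y]
    return math.hypot(x1 - x2, y1 - y2)

def estimated_path_latency(path):
    """Sum of estimated link latencies along a candidate path."""
    return sum(estimated_latency(a, b) for a, b in zip(path, path[1:]))

# A client can rank candidate paths without probing any of the links.
print(estimated_path_latency(["relay_a", "relay_c", "relay_b"]))
```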

This is a pure path selection proposal, as quite a few have appeared in the past year to reduce latency or increase node utilisation in Tor. The question with all these proposals is: how much anonymity would these path selection strategies provide?

The methodology we present in “The Bayesian Traffic Analysis of Mix Networks” provides a way of answering such questions, by carefully modelling the path selection strategy. Applying the same methodology to these path selection proposals would be of clear benefit, and an excellent project for anyone interested in understanding better how to apply inference-based techniques to traffic analysis.

This morning at PETS 2009, the paper on “Physical Layer Attacks on Unlinkability in Wireless LANs” was presented. The idea is that despite all anonymization techniques at the logical layers, such as IP address modulation, the physical location of IEEE 802.11 transmitters can be localised, and thus unlinkable packets emanating from the same transmitter can be linked together.

The approach uses signal strength and triangulation techniques, with a machine learning twist, to cluster emissions together and link them to the same transmitter. A set of countermeasures is also presented, in which transmitters modulate their signal strength to foil this clustering.
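A stripped-down version of the linking step clusters frames by the vector of received signal strengths seen at several monitoring points, and attributes frames in the same cluster to the same transmitter. The tiny k-means below stands in for the paper's machine learning component, and the RSSI readings are made up.

```python
import math
import random

def kmeans(points, k, iters=50, rng=random.Random(0)):
    """Very small k-means: cluster RSSI vectors (one per frame) so that frames
    in the same cluster are attributed to the same physical transmitter."""
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        # Recompute centroids; keep the old one if a cluster is empty.
        centroids = [
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Each frame is observed by three monitors; values are invented RSSI readings (dBm).
frames = [(-40, -70, -55), (-41, -69, -56), (-80, -45, -60), (-79, -44, -61)]
centroids, clusters = kmeans(frames, k=2)
print(clusters)   # frames grouped by likely transmitter
```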

The attacker model was restricted to using commodity hardware, so physical device fingerprinting attacks were not considered.

Over the last year, we have been developing a set of systematic techniques for analysing anonymity systems and performing traffic analysis. These cast the problem of traffic analysis as a Bayesian inference problem, in which the adversary observes some traces, according to a threat model, and then has to infer the hidden state of the system, which is equivalent to tracing who is talking to whom.
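As a flavour of the approach (and only a flavour, not the models in the papers), consider a single round of a threshold mix with two senders and two receivers: given an assumed prior over each sender's contacts, the posterior over the hidden sender-receiver matching follows directly from Bayes' theorem. The priors below are invented.

```python
from itertools import permutations

# Invented prior profiles: probability that each sender writes to each receiver.
priors = {
    "alice": {"r1": 0.8, "r2": 0.2},
    "bob":   {"r1": 0.3, "r2": 0.7},
}
senders, receivers = ["alice", "bob"], ["r1", "r2"]

# Hidden state: a perfect matching of senders to the receivers seen leaving the
# mix. A matching's posterior weight is the product of the per-sender prior
# probabilities, renormalised over all candidate matchings.
posterior = {}
for assignment in permutations(receivers):
    matching = dict(zip(senders, assignment))
    weight = 1.0
    for s, r in matching.items():
        weight *= priors[s][r]
    posterior[tuple(matching.items())] = weight

total = sum(posterior.values())
for matching, weight in posterior.items():
    print(dict(matching), round(weight / total, 3))
# {'alice': 'r1', 'bob': 'r2'} 0.903 ; {'alice': 'r2', 'bob': 'r1'} 0.097
```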

So far we have looked at the analysis of mix networks, the analysis of Crowds, and a Bayesian approach to long term intersection attacks. The papers describing each of these are available online.

I just sat through the first session of PETS 2009, which was about privacy policies, where two really interesting pieces of research were presented.

Ram presented work on “Capturing Social Networking Privacy Preferences” [pdf], in which he proposes to infer privacy policies for social networks automatically, and to present them as templates or starting points for users to define their own policies. The methodology used is really neat: they record the location of a number of users, and every night they ask the users whether they would be happy to share their locations with different circles of theirs. Then they try to extract a set of standard policies, based on time, location, and the type of contact that can see your location.
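A toy version of the policy-extraction step gives a feel for it: bucket the nightly yes/no answers by circle and time of day, and propose “share” as the default whenever the observed share rate clears a threshold. The data and threshold below are invented, and this is not the paper's actual algorithm.

```python
from collections import defaultdict

# Invented nightly audit answers: (circle, time_of_day, shared?)
answers = [
    ("family", "evening", True), ("family", "evening", True),
    ("family", "work hours", True), ("coworkers", "evening", False),
    ("coworkers", "work hours", True), ("coworkers", "work hours", True),
    ("strangers", "evening", False), ("strangers", "work hours", False),
]

def extract_policy(answers, threshold=0.75):
    """Propose a default rule per (circle, time-of-day) bucket based on the
    fraction of 'yes' answers; the rules form a candidate policy template."""
    buckets = defaultdict(list)
    for circle, tod, shared in answers:
        buckets[(circle, tod)].append(shared)
    return {
        key: ("share" if sum(v) / len(v) >= threshold else "hide")
        for key, v in buckets.items()
    }

print(extract_policy(answers))
# e.g. {('family', 'evening'): 'share', ('coworkers', 'evening'): 'hide', ...}
```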

The second study, presented by Aleecia, is on how easy and pleasant it is to read privacy policies (“A Comparative Study of Online Privacy Policies and Formats”). They find that privacy policies in different formats are more or less easy to read and understand, but that across the board privacy policies are difficult to understand, easy to misunderstand, and totally unpleasant to read.

In the 1950s Lambros D. Callimahos built the Zendian Problem [zip] as an integrated exercise in traffic analysis, cryptanalysis, and communications intelligence operations. The state of cryptology today has moved on, beyond the point where an analyst can rely on plaintext to drive operations. The state of traffic analysis techniques, and the availability of more computing power, requires a new generation of exercises to sharpen the tools and minds of those in the field.

Steven Murdoch and I have been developing, on and (mostly) off over the past year, an exercise in traffic analysis, and in particular in long term disclosure attacks. The exercise was first presented and used at the Brno FIDIS summer school, and we are now using it as part of an industrial training curriculum.

The exercise consists of an anonymized trace of communications, mediated by an anonymity system, which a group of people used to message each other. The message traces are synthetic, but generated based on a real-world social network. Users have favorite communication partners, talk more or less according to the type of relationship and the time of day, and may reply to each other's messages.
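The flavour of the trace generation can be conveyed by a very small simulator: each user has a few favorite contacts they message more often, and in each round the observer only sees which users sent and which received, never who sent to whom. This is a toy stand-in, not the scripts shipped with the instructor's pack.

```python
import random

rng = random.Random(1)
users = list(range(10))
# Toy social network: each user has two favorite contacts they message most.
favorites = {u: rng.sample([v for v in users if v != u], 2) for u in users}

def one_round(batch_size=4):
    """One anonymized round: the observer sees the set of senders and the
    multiset of receivers, but not the correspondence between them."""
    senders = rng.sample(users, batch_size)
    receivers, hidden = [], {}
    for s in senders:
        # Favorite contacts are chosen 80% of the time, anyone else otherwise.
        if rng.random() < 0.8:
            r = rng.choice(favorites[s])
        else:
            r = rng.choice([v for v in users if v != s])
        receivers.append(r)
        hidden[s] = r
    rng.shuffle(receivers)
    return senders, receivers, hidden   # 'hidden' is the instructor's ground truth

observed = [one_round() for _ in range(100)]
```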

The goal is to apply any disclosure attack, and to de-anonymize and trace as many messages as possible. An oracle is provided that outputs the success rate, and the instructor's pack includes the original messages as well as the scripts used to simulate the messaging behaviours and the anonymization layer. We tried to keep the exercise and success rates realistic, so do not expect to ever get 100%: significantly better than random is already quite good.

The richness of the messaging behaviour is designed to stress the most advanced statistical disclosure techniques, which make use of the social network, replies, and perfect matchings. The literature on statistical disclosure can be found on the exercise page, and an example implementing the simple SDA is provided in the bundle. The family of Disclosure Attacks (devised by Kesdogan et al.) might also be modified and applied to the exercise. Our new attack, soon to be presented at PETS and using Bayesian inference, could also be applied.
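To give a flavour of the simple SDA mentioned above, here is a sketch in its spirit (not the exact script in the bundle): for a target sender, compare the receiver distribution of the rounds in which the target sends against the background distribution of all other rounds; the largest residuals point to the target's likely contacts.

```python
from collections import Counter

def sda(rounds, target):
    """rounds: list of (senders, receivers) pairs as seen by the observer.
    Returns receivers ranked by how much more often they appear in rounds
    where `target` sends than in the background rounds."""
    in_rounds  = [rs for ss, rs in rounds if target in ss]
    out_rounds = [rs for ss, rs in rounds if target not in ss]

    def distribution(batches):
        counts = Counter(r for batch in batches for r in batch)
        total = sum(counts.values()) or 1
        return {r: c / total for r, c in counts.items()}

    signal, background = distribution(in_rounds), distribution(out_rounds)
    residual = {r: signal.get(r, 0.0) - background.get(r, 0.0)
                for r in set(signal) | set(background)}
    return sorted(residual, key=residual.get, reverse=True)

# Used with the toy simulator above:
# ranked = sda([(s, r) for s, r, _ in observed], target=0)
# print(ranked[:3])   # likely contacts of user 0
```

Run against the toy simulator above, this should recover the target's favorite contacts after enough rounds; on the real exercise data the same idea needs considerably more care.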

A couple of caveats: this is an exercise, meant to help people learn about long term traffic analysis attacks and to let them implement the attacks on a rich, but safe, dataset. The objective is to learn.

  1. It is not a benchmarking tool for comparing attacks. We are not sure that the traffic patterns are typical enough to guarantee that an attack that performs better in the setting of our exercise would also perform better on real data.
  2. It is also not a competition or test. We publish all of the hidden state for instructors, and the random number generator used was not cryptographically strong. The point is not who can get the highest score, but the quality of understanding of the attacks.

Caveats aside, I do hope that the exercise opens a discussion about how we can exchange training or live datasets, formats for evaluating traffic analysis attacks, and some level of standardisation of the interfaces of attack scripts. These will probably be topics for debate at the Privacy Enhancing Technologies Symposium 2009 next week.

Both Steven and I would be very interested to hear about your experiences with the exercise, whether you take it yourself or give it to a class as an instructor. If you extend the exercise, or generate particular bundles of anonymized datasets, we would also be happy to host them.
