I am at PETS 2016 writing this summary in real-time as Tao Wang presents his work with Ian Goldberg on “On Realistically Attacking Tor with Website Fingerprinting”. This is the latest paper addressing challenges identified by the Tor project to make website fingerprinting (WF) research more informative about real-world threats. We have a stake in this since Jamie Hayes from UCL will be presenting a joint paper, “k-fingerprinting: a Robust Scalable Website Fingerprinting Technique”.

The key problem is that a number of reported website fingerprinting results are evaluated in laboratory conditions. Website fingerprinting considers an adversary looking at an encrypted and anonymized stream, whose task is to recognize which website is being accessed through the protections. However, the evaluation of the attacks considers a restricted and stylized setting, where a single client accesses a single (usually landing) page from a server. This is restrictive: it does not consider accessing multiple pages, it assumes the start and end of page loads are known, and it assumes the training data is up to date.

So this paper considers how to deal with such settings. First, how to deal with noisy data? For example, what if another tab creates some continuous traffic, i.e. noise that may confuse the attacker? In this case the attacker will see a combined stream: the signal from a page load superimposed on the noise. Tao illustrates that as the volume of noise increases, the accuracy of attacks drops radically. Trivial solutions, such as trying to subtract the noise, are not likely to succeed. The actual solution is to train on the noisy version of the stream, and that indeed brings the accuracy of the attack back to dangerous levels.
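To make the “train on noisy streams” idea concrete, here is a minimal sketch of that style of pipeline; the function names, features, and classifier choice are my own illustration, not the paper's actual implementation. The idea is simply to superimpose recorded background traffic onto clean page-load traces and train on the combined streams.

```python
import random
from sklearn.ensemble import RandomForestClassifier

def superimpose(page_trace, noise_trace):
    """Merge a page-load trace with background noise by timestamp.
    Each trace is a list of (timestamp, signed_size) tuples; positive
    sizes are outgoing cells, negative sizes are incoming ones."""
    return sorted(page_trace + noise_trace, key=lambda p: p[0])

def features(trace):
    """Crude summary features; real attacks use far richer ones."""
    sizes = [s for _, s in trace]
    out = [s for s in sizes if s > 0]
    inc = [s for s in sizes if s < 0]
    return [len(trace), sum(out), -sum(inc), trace[-1][0] - trace[0][0]]

def noisy_training_set(clean_traces, labels, noise_traces, copies=5):
    """Build a training set where every clean trace is combined with
    several randomly chosen background-noise traces."""
    X, y = [], []
    for trace, label in zip(clean_traces, labels):
        for _ in range(copies):
            noisy = superimpose(trace, random.choice(noise_traces))
            X.append(features(noisy))
            y.append(label)
    return X, y

# Hypothetical usage, given recorded clean_traces, labels, noise_traces:
# X, y = noisy_training_set(clean_traces, labels, noise_traces)
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```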

The second problem is splitting: how can an attacker detect the start and end of loading a specific bundle of resources corresponding to a web page? There can be a lot of quiet time between loads, no time at all, or even a negative time when the loads overlap. It turns out that when there is a large gap it can be overcome with little effect on the true positive rate, as the split is obvious. However, the problem has a generic solution for the other cases: you simply use a classifier to learn where to split, and then apply that classifier. I do wonder if using both incoming and outgoing channels could improve this.
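As an illustration of what a learned splitter might look like (again, hypothetical features and classifier of my own, not the one from the paper): label candidate positions in a combined trace as split points or not, using simple local timing and direction features, and train a standard classifier on traces where the true boundaries are known.

```python
from sklearn.ensemble import RandomForestClassifier

def gap_features(trace, i, window=20):
    """Features describing position i in a combined trace: the time gap
    after packet i, plus counts of outgoing/incoming packets in a small
    window before and after it. trace is a list of (timestamp, signed_size)."""
    times = [t for t, _ in trace]
    sizes = [s for _, s in trace]
    gap = times[i + 1] - times[i]
    before = sizes[max(0, i - window):i]
    after = sizes[i + 1:i + 1 + window]
    return [gap,
            sum(1 for s in before if s > 0), sum(1 for s in before if s < 0),
            sum(1 for s in after if s > 0), sum(1 for s in after if s < 0)]

# Hypothetical training loop, given traces with known page boundaries:
# X = [gap_features(trace, i) for trace, i in candidate_positions]
# y = [1 if (trace, i) is a true boundary else 0 for trace, i in candidate_positions]
# splitter = RandomForestClassifier(n_estimators=100).fit(X, y)
```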

Finally, they address the issue of maintaining a training set, since it is expensive to collect such data continuously. So either you have a small, fresh training set or a larger but out-of-date one. There is a trade-off, but Tao demonstrates that even a somewhat old training set (10 days) still provides good accuracy. In conclusion, it seems that those roadblocks can be tackled to make WF practical, and there is an increased call to implement defenses.

My meta-conclusion is twofold: first, it is clear that the roadblocks first presented in the Tor Project blog post are indeed issues, but they are not a show-stopper for website fingerprinting. It took the community about 2 years to mature into using more modern machine learning (ML) techniques, and this work illustrates that maturation. Yet, my second conclusion is that the maturity of the ML usage for fingerprinting is still low: it is pretty self-evident that to be robust to noise one should train on noisy data; I am glad this is now explicit. It is also self-evident that we could take small data sets of pages and network conditions and synthesize a very large set of training examples to make learning even more robust to noise and better at detecting splits. No one has done this yet; maybe that would be a nice paper for PETS next year.
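For what it is worth, here is a hedged sketch of what such synthesis could look like; this is entirely my own speculation about a possible approach, not anything presented in the talk. The idea: perturb packet timings to simulate different network conditions, and overlap pairs of real traces to generate labelled examples of overlapping loads and split points.

```python
import random

def perturb_timing(trace, jitter=0.05):
    """Simulate different network conditions by adding small random
    jitter to each inter-packet gap; trace is a list of (timestamp, size)."""
    out, t = [], 0.0
    prev = trace[0][0]
    for time, size in trace:
        gap = time - prev
        t += gap * (1 + random.uniform(-jitter, jitter))
        out.append((t, size))
        prev = time
    return out

def overlap(trace_a, trace_b, offset):
    """Synthesize an overlapping-loads example: start loading trace_b
    `offset` seconds after trace_a begins, and merge by timestamp."""
    shifted = [(t + offset, s) for t, s in trace_b]
    return sorted(trace_a + shifted, key=lambda p: p[0])

# From a handful of real traces, one could synthesize many timing-perturbed,
# noisy, and overlapping examples, with the known offsets doubling as
# labelled split points for training a splitter.
```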