One of my key annoyances while doing work in privacy technologies is the poor support for low-level cryptography in my favorite programming language, Python. Today, I would like to share my work on building petlib, a more or less pythonic wrapper around the OpenSSL low-level crypto and math libraries, as well as a number of example privacy-enhancing technologies (PETs) that I have implemented with it.

The needs of someone doing research in the PETs field are quite different from those of other developers: on the one hand, we need access to low-level primitives (such as block cipher and hash function operations), as well as low-level mathematical functions on big integers and elliptic curves over finite fields. A number of available libraries try to hide those primitives from developers behind abstractions such as “signed envelopes” or “secure sockets” — which do not serve those who try to build different abstractions. On the other hand, issues such as tight memory management and absolute control over other low-level aspects of the library are not essential; in fact a clean programming interface that leads to beautifully clear reference code for proposed protocols is preferable.

The petlib library is available for everyone to use; after installing the OpenSSL prerequisites, it can be installed from the Python package repositories with:

pip install petlib

The petlib library was used as the basis for teaching the labs of the Privacy Enhancing Technologies course at UCL, and thus has extensive documentation, and is properly version-controlled, packaged and tested.

The best way to get a feel for how the library can be used to build PET prototypes is to browse the examples in the source tree.
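
For a flavor of the API, here is a minimal sketch of a Schnorr signature over an elliptic curve. It is written from memory of the documented petlib interface, so treat it as illustrative rather than canonical:

from hashlib import sha256

from petlib.ec import EcGroup
from petlib.bn import Bn

G = EcGroup()              # a default NIST curve
g = G.generator()
o = G.order()

x = o.random()             # private key
pub = x * g                # public key

def schnorr_sign(message):
    k = o.random()                                             # ephemeral nonce
    R = k * g                                                  # commitment
    c = Bn.from_binary(sha256(R.export() + message).digest()) % o
    s = (k + c * x) % o                                        # response
    return R, s

def schnorr_verify(message, R, s):
    c = Bn.from_binary(sha256(R.export() + message).digest()) % o
    return s * g == R + c * pub

R, s = schnorr_sign(b"Hello PETs")
assert schnorr_verify(b"Hello PETs", R, s)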

In terms of more real-world research, we have already used petlib to implement prototypes for a few projects and labs.

One key missing component of the underlying OpenSSL crypto library is support for computations on pairings of elliptic curves. This limits the types of protocols that can currently be implemented with petlib, until such functionality becomes available in the underlying libraries (please contribute!). Bug reports and pull requests with fixes to the code and documentation are very welcome.

Last term I had the opportunity and pleasure to prepare and teach the first course on Privacy Enhancing Technologies (PETs) at University College London, as part of the MSc in Information Security.

The course covers principally, and in some detail, engineering aspects of PETs and caters for an audience of CS / engineering students that already understand the basics of information security and cryptography (although these are not hard prerequisites). Students were also provided with a working understanding of legal and compliance aspects of data protection regimes by guest lecturer Prof. Eleni Kosta (Tilburg), as well as a world-class introduction to human aspects of computing and privacy by Prof. Angela Sasse (UCL). This security & cryptographic engineering focus sets the course apart from related courses.

The taught part of the course runs for 20 hours over 10 weeks, split into 10 topics.

Most importantly the course includes 10 hours of labs (20 next year!), split into 5 exercises that give students (and their teachers!) hands-on experience implementing extremely advanced privacy enhancing technologies. More generally the course provides an introduction to solid cryptographic engineering, test-driven development, testing & QA tools and code audits. The programming language used was Python on a Linux environment, with the petlib library that was specially developed for this course.

For each lab exercise, students worked in pairs and were provided with a partial code file and a set of unit tests; they were asked to fill in the remaining code to fulfill the task and, at a minimum, make the unit tests pass. The topics of the exercises track the first 5 lecture topics.
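
To give a flavor of the format, a completed exercise and its unit test might look roughly as follows. This is a hypothetical, simplified example rather than one of the actual lab files, and the AES-GCM helper calls are written from memory of the petlib interface:

# lab_crypto.py -- a hypothetical, simplified exercise (not an actual lab file)
import os
from petlib.cipher import Cipher

def encrypt_message(key, message):
    """Encrypt a message under a 16-byte key using AES-128-GCM."""
    aes = Cipher("aes-128-gcm")
    iv = os.urandom(16)
    ciphertext, tag = aes.quick_gcm_enc(key, iv, message)
    return iv, ciphertext, tag

def decrypt_message(key, iv, ciphertext, tag):
    """Decrypt and check integrity; raises an exception if the tag is invalid."""
    aes = Cipher("aes-128-gcm")
    return aes.quick_gcm_dec(key, iv, ciphertext, tag)

# test_lab_crypto.py -- the unit tests students had to make pass
def test_gcm_round_trip():
    key = os.urandom(16)
    iv, ciphertext, tag = encrypt_message(key, b"Hello World!")
    assert decrypt_message(key, iv, ciphertext, tag) == b"Hello World!"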

Finally, part of the grading was based on students performing a code review of other groups' submissions, looking for code defects leading to security or other bugs.

Overall, I am very proud of the progress everyone made. The course was attended by 16 MSc students and 2 MEng students. Everyone was eventually able to complete all lab assignments — not a given considering the advanced nature of the tasks at hand. It was evident while discussing the final exercise, on building a selective disclosure credential, with students that many had developed an intuitive understanding of how to build solutions based on zero-knowledge protocols, and all had definitely overcome their initial fear of these more advanced concepts in PETs.

I was also very impressed with the many students who were able to tackle the hardest questions in the exam. One of those questions basically asked students to re-invent a variant of the privacy-preserving genomic testing protocol we presented at WPES 2014 — and many did so successfully. Similarly, they were asked to de-anonymize a mechanism very similar to the 15/15 rule in place in California to “protect” smart meter readings, and again many did so successfully under time constraints and the high-pressure environment of an exam. As ever, the great engagement from students was the most rewarding part of teaching the course.

All material is available online (see links to slides, and git repositories), and I would be delighted to share / receive any additional exercises by others finding this material relevant to their courses.

I will be participating in a panel this afternoon on “Creating Usable and Secure Software”, in the context of the conference on Digital Citizenship and Surveillance Society. I share a platform with a number of illustrious people — Dave Hrycyszyn, Lola Oyelalo and Blaine Cook — with much deeper experience in usable software and services development. However, I will attempt to provide some context, and my opinions, on why we observe a broadly poor state of affairs when it comes to the usability of privacy technologies — and hopefully open a discussion on how to overcome the roadblocks.

My main two positions will be as follows:

  • The political context within which technical security and privacy research and development had to be conducted over the past 40 years greatly contributed to the lack of wide deployment and poor usability of privacy technologies.
  • The lack of “knowledge” about methods for developing usable privacy-friendly solutions offers only a partial explanation for this poor state of affairs, and has to compete with other roadblocks that systemically undermined the deployment of usable privacy technologies.

First, it is worth reminding ourselves that research into security technologies, and strong cryptography specifically, was until recently the prerogative of governments. Public discussion and know-how on this topic only developed seriously after the mid-1980s, and often despite serious pressure from the US and other governments. The technical security community is small, and there remain serious technical challenges to providing privacy-friendly solutions — solutions that require deep expertise developed over years of practice (and thus funding).

Second, export control regimes, as well as requirements for cooperation with law enforcement, significantly slowed down the blanket deployment of privacy technologies, even after the strict export control regime of the 1990s was lifted. What makes a number of privacy technologies unusable — email encryption, instant messaging encryption — is the fact that common clients do not support them transparently and by default, requiring plug-ins, user configuration and manual key management. Thus the lasting impact of these regulations has not been the non-proliferation of strong crypto technologies, but the lack of integration of these into mainstream platforms. It is telling that the current law enforcement and government narrative is not about preventing encryption know-how from spreading, but rather about discouraging the wide deployment of such technologies without the ability for back-door or front-door access.

Third, there are commercial pressures — which again have been related to government hostility toward the wide deployment of privacy technologies. It is easy to forget that governments are major customers of technology, and are thus able to dictate requirements that make it difficult to widely deploy privacy technologies. It is telling that mainstream mail clients — such as Microsoft Outlook — do not transparently support PGP-based end-to-end encryption and have instead opted for S/MIME and models that make the use of encryption by individuals rather difficult. In this context one may assume that the key customers of this software — large enterprises and governments — simply never asked for such features, and in fact probably considered such a feature to conflict with other requirements (such as the need to recover the mail of employees, backup, …).

These commercial pressures have changed in the past few years, as large internet companies have started relying heavily on serving end-users (search, webmail, social networking). Sadly, these companies have adopted both a business model — ad-based monetization — and a technical architecture — cloud computing — that make meaningful privacy protection very difficult. In turn the “success” of those architectures has led to an extreme ease of development using this model, and an increasing difficulty in providing end-user solutions with appropriate privacy protections — let alone usable ones.

The rise of services has pushed a number of key privacy technologies into not being commercially supported as key features, and into being, in effect, at best a “commons” — with the governance and funding problems this entails. We have recently learned about the systemic under-funding of key privacy technologies such as OpenSSL and GPG. Technologies like Tor are mostly funded for their national firewall traversal features, while development of their anonymity features suffers. Unlike other commons (health, parks, quality assurance in medicines), the state has not stepped in to help with either governance or funding — quite the opposite. For example, standardization efforts have systematically promoted “surveillance by design” instead of best-of-breed privacy protection; funding for surveillance technology is enormous compared to funding for privacy technologies; and, somewhat ironically, a number of calls for funding of privacy technologies are in the context of making surveillance more “privacy friendly” — leading to largely nonsensical outcomes.

So the lack of “knowledge” about how to develop usable software, while a contributing factor, has to be seen within the context of the above structural pressures. In parallel, challenges undoubtedly exist within the discipline of UX, which is itself recent and constantly evolving. Along with serious funding for collaboration on building more usable privacy software (which the Simply Secure project that I am associated with attempts to provide), we need a strategy to counter those systemic pressures and ensure the wide deployment of usable privacy technologies.

Last week I had the opportunity to attend a joint US National Academy of Sciences and UK Royal Society event on cyber-security in Washington DC. One of the speakers, a true expert that I respect very much, described how they envision building (more) secure systems, and others in the audience provided their opinion (Chatham House Rule prevents me from disclosing names). The debate was of high quality, however it did strike me that it remained at the level of expert opinion. After 40 years of research in cyber-security, should we not be doing better than expert opinion when it comes to understanding how to engineer secure systems?

First, let me say that I have a great appreciation for craftsmanship and the deep insights that come from years of practice. Therefore when someone with experience tells me to follow a certain course of action to engineer a system, in the absence of any other evidence, I do listen carefully. However, expert opinion is only one form of evidence, and in some respects the weakest, in what researchers in other disciplines have defined as a hierarchy of evidence. Stronger forms of evidence include case studies, case-control and cohort studies, double-blind studies with good sample sizes and significant results, and systematic meta-analyses and reviews.

In security engineering we have quite a few case reports, particularly relating to specific failures, in the form of design flaws and implementation bugs. We also have a set of methodologies as well as techniques and tools that are meant to help with security engineering. Which of these work, and at what cost? How do they compare with each other? What are the non-security risks (cost, complexity, training, planning) associated with them? There is remarkably little evidence, besides at best expert opinion and at worst flaming, on which to decide. This is particularly surprising, since a number of very skilled people have spent considerable time advocating for their favorite engineering paradigms in the name of security: static analysis, penetration testing, code reviews, strong typing, security testing, secure design and implementation methodologies, verification, pair-coding, use of specific frameworks, etc. However, besides opinion it is hard to find much evidence of how well these work in reducing security problems.

I performed a quick literature survey, which I keep here for my own future benefit.


It takes quite a bit of institutional commitment and vision to build a strong computer security group. For this reason I am delighted to share here that UCL computer science hired three amazing new faculty members into the Information Security group in 2014, bringing the total to nine. Here is the line-up of the UCL Information Security group, which also teaches the MSc in Information Security:

  • Prof. M. Angela Sasse is the head of the Information Security Group and a world expert on usable security and privacy. Her research touches upon the intersection of security mechanisms or security policies and humans — the mental models they have, the mistakes they make, and the accurate or false perceptions that lead to security systems working or failing.
  • Dr Jens Groth is a cryptographer renowned for his work on novel zero-knowledge proof systems (affectionately known as Groth-Sahai), robust mix systems for anonymous communications and electronic voting, and succinct proofs of knowledge. These are crucial building blocks of modern privacy-friendly authentication and private computation protocols.
  • Dr Nicolas Courtois is a symmetric-key cryptographer, known for pioneering work on algebraic cryptanalysis, an extraordinary hacker of real-world cryptographic embedded systems, who has recently developed a keen interest in digital distributed currencies such as bitcoin.
  • Prof. David Pym is both an expert on logic and verification, and also applies methods from economics to understand complex security systems and the decision making in organizations that deploy them. He uses stochastic processes, modelling and utility theory to understand the macro-economics of information security.
  • Dr Emiliano de Cristofaro researches privacy and applied cryptography. He has worked on very fast secure set intersection protocols that are key ingredients of privacy technologies, and is one of the leading experts on protocols for privacy-friendly genomics.
  • Dr George Danezis (me) researches privacy technologies, anonymous communications, traffic analysis, peer-to-peer security and smart metering security. I have lately developed an interest in applying machine learning techniques to problems in security such as anomaly detection and malware analysis.
  • Dr Steven Murdoch (new!) is a world expert on anonymous communications, through his association with the Tor project, on banking security, and on the design of fielded banking authentication mechanisms. He is a media darling when it comes to explaining the problems of real-world deployed cryptographic systems in banking.
  • Dr Gianluca Stringhini (new!) is a rising star in network security, with a focus on the technical aspects of cyber-crime and cyber-criminal operations. He studies honest and malicious uses of major online services, such as social networks, email services and blogs, and develops techniques to detect and suppress malicious behavior.
  • Dr Sarah Meiklejohn (new!) has an amazing dual expertise in theoretical cryptography on the one hand, and digital currencies and security measurements on the other. She has developed techniques to trace stolen bitcoins, built cryptographic compilers, and contributed to fundamental advances in cryptography such as malleable proof systems.

One key difficulty when building a security group is balancing cohesion, to achieve critical mass, with diversity to cover a broad range of areas and ensuring wide expertise to benefit our students and research. I updated an interactive graph illustrating the structure of collaborations amongst the members of the Information Security Group, as well as their joint collaborators and publication venues. It is clear that all nine faculty members both share enough interest, and are complementary enough, to support each other.

Besides the nine full-time faculty members with a core focus on security, a number of other excellent colleagues at UCL have a track record of contributions in security, supporting teaching and research. Here is just a handful:

  • Prof. Brad Karp is an expert in networking and systems and has made seminal contributions to automatic worm detection and containment.
  • Dr David Clark specializes in software engineering with a core interest in information flow techniques for confidentiality, software security and lately malware.
  • Dr Earl Barr works on software engineering, and has researched security bugs and malware, as well as ideas for simple key management.
  • Prof. Ingemar Cox (part-time at UCL) is a world expert in multimedia security, watermarking and information hiding.
  • Prof. Yvo Desmedt (part-time at UCL) is a renowned cryptographer with key contributions in group key exchange, zero-knowledge and all fields of symmetric and asymmetric cryptography.

The full list of colleagues working in security, including visiting researchers, post-doctoral researchers and research students, lists many more people — making UCL one of the largest research groups in Information Security in Europe.

I am today attending the first Internet Privacy Engineering Network (IPEN) workshop, where the issue of translating Data Protection principles into requirements has been raised a number of times. While this exercise needs to be repeated for each given service or application, it reminded me that I had drafted a number of generic Technical Requirements for Processing PII. These need to be reviewed and validated, but I hope they offer at least proof that the problem can be made tractable.

The paper “I Know Why You Went to the Clinic: Risks and Realization of HTTPS Traffic Analysis” revisits the vulnerability of HTTPS encryption to traffic analysis.

The authors note that most secure web connections directly access a specific website requested by the user (as opposed to the setting where a user is observed using Tor, where the site is not known). There are challenges in performing traffic analysis in this setting: different websites are usually quite distinct, while many sub-pages within the same website may share a common layout that makes them hard to distinguish through encryption. Additional complications are added by caches, cookies, and the use of Flash. However, this setting may be easier than the Tor case, in that the adversary could realistically enumerate all pages on a website.

The paper chose 10 sites, including health, finance, video rental, and civil liberties sites. Traces are collected from encrypted requests and replies, and some simple pre-processing is done. In particular, the traffic volumes are collated into bursts and only their sizes are stored; this adds robustness to jitter in the traffic. Traces are then clustered using k-means, and Gaussians are used to model each of the clusters. The use of the Gaussian distribution provides some further resilience to noise in the size of resources.
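
As a rough illustration of this pre-processing step (this is not the authors' code; the burst extraction and parameters are simplified guesses), one could extract burst sizes and cluster traces along these lines:

# Not the authors' code: a simplified sketch of burst extraction and clustering.
import numpy as np
from sklearn.cluster import KMeans

def burst_sizes(packets):
    """Collapse a trace of (direction, size) packets into burst sizes:
    the total bytes of each maximal run of same-direction packets."""
    bursts, cur_dir, cur_size = [], None, 0
    for direction, size in packets:
        if cur_dir is not None and direction != cur_dir:
            bursts.append(cur_size)
            cur_size = 0
        cur_dir = direction
        cur_size += size
    if cur_size:
        bursts.append(cur_size)
    return bursts

def featurize(traces, dims=40):
    """Turn each trace into a fixed-length vector of its first burst sizes."""
    X = np.zeros((len(traces), dims))
    for i, trace in enumerate(traces):
        b = burst_sizes(trace)[:dims]
        X[i, :len(b)] = b
    return X

# traces = [...]  # one packet trace per page load
# clusters = KMeans(n_clusters=8).fit_predict(featurize(traces))
# A Gaussian fitted per cluster then adds tolerance to small size variations.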

A machine learning approach is then followed to select which features distinguish pages: a logistic regression model is used to learn weights that best separate pages. A Hidden Markov Model is then used to track full traces of user navigation through an encrypted website.

To evaluate the approach, traces were collected using virtual machines. This allows the full state of a VM to be frozen, resetting the machine state perfectly between trace collections. Traces were collected in sessions where user state was meant to be preserved; otherwise a fresh VM could be spun up to clear the state. In this way, different conditions of cache on/off, cookies, and browsing different sites were used for the collection. Interestingly, having the cache on systematically reduced the accuracy of the attack by 10%-20%.

Finally, the paper presents a defense against traffic analysis. It is based on padding the bursts of traffic to some threshold in order to confuse the feature extraction stage. Many thresholds are possible, and the maximum is chosen subject to a size “inflation” budget. Other defenses were also tried and compared, and there is a substantial reduction in the accuracy of the attack.
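
A much simplified illustration of the padding idea (not the paper's exact algorithm, and the threshold levels are invented) is to round every burst up to the next allowed threshold, hiding its true size:

# A much simplified padding defense: not the exact algorithm from the paper.
def pad_burst(size, thresholds):
    """Return the smallest allowed threshold >= size (thresholds sorted ascending)."""
    for t in thresholds:
        if size <= t:
            return t
    return size  # bigger than every threshold: left unpadded

# Example threshold levels (invented); the paper searches for the set that
# maximises confusion subject to a bandwidth "inflation" budget.
thresholds = [512, 2048, 8192, 32768]
padded = [pad_burst(b, thresholds) for b in (300, 1500, 9000, 40000)]
# padded == [512, 2048, 32768, 40000]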

The paper “A Predictive Differentially-Private Mechanism for Mobility Traces” looks at sanitizing mobility traces within the paradigm of differential privacy.

The idea is that a user wishes to use a location-based service. However, the user also wishes to maintain some privacy, so they somewhat distort the reported location to hide their exact position. Of course, there is a clear trade-off between utility, namely the accuracy of the reported location, and the privacy that can be maintained. In this work, utility is measured by how close the reported location is to the real location.

At the heart of the system lies a definition of privacy based on “geo-indistinguishability”. The insight is that locations that are close by need to be indistinguishable, while locations very far apart may be distinguished. This allows a higher degree of information leakage than traditional differential privacy, but stands a chance of offering some utility for a single user trace.
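
Roughly, in the notation of the geo-indistinguishability literature, a mechanism K satisfies ε-geo-indistinguishability if reports from nearby locations are almost equally likely:

Pr[K(x) ∈ S] ≤ exp(ε · d(x, x′)) · Pr[K(x′) ∈ S]    for all locations x, x′ and output sets S,

where d(x, x′) is the distance between the two locations; restricted to any radius r, this behaves like ε·r-differential privacy.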

A previously proposed mechanism offers such privacy by first perturbing the center point of the user's location (using a 2D generalization of the Laplace distribution) and then requesting information about a larger area. Its privacy degrades in a predictable manner when multiple observations are seen by the adversary. However, the authors note that real-world traces are highly correlated with each other: for example, a user stays in a cafe for a while, or follows a certain path on a motorway. This insight may be used to reduce the noise introduced by the mechanism.
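
As a rough sketch of that perturbation step (following the published description of planar Laplace noise; this is illustrative, not the authors' implementation), the noise can be sampled as follows:

# Illustrative sampling of planar Laplace noise (following the published
# description of geo-indistinguishable mechanisms; not the authors' code).
import numpy as np
from scipy.special import lambertw

def planar_laplace(x, y, epsilon):
    """Perturb (x, y) with 2D Laplace noise calibrated to epsilon."""
    theta = np.random.uniform(0, 2 * np.pi)          # uniform direction
    p = np.random.uniform(0, 1)
    # Inverse CDF of the radius, using the -1 branch of the Lambert W function
    r = -(1.0 / epsilon) * (lambertw((p - 1) / np.e, k=-1).real + 1)
    return x + r * np.cos(theta), y + r * np.sin(theta)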

First, a straw man mechanism is proposed: a prediction function is defined that tries to predict the next position to report from the previously published position. If the prediction yields a “good enough” location, subject to some threshold, the prediction is reported again. If not, a fresh noisy location is released, at a cost to the privacy budget. However, this simple scheme breaks the definition of geo-indistinguishability.

The straw man mechanism can be strengthened by performing a private test of the accuracy of the prediction, which in itself consumes some of the privacy budget. This prevents information from leaking through the prediction test on the data, and yields a geo-indistinguishable mechanism.
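
Here is a rough sketch of this "test then release" idea as I understand it, building on the planar_laplace sketch above; the budget accounting and the split between test and noise budgets are simplified, and the function names are mine:

# A rough sketch of the "test then release" idea; budget accounting is
# simplified and the parameter split is arbitrary, for illustration only.
import numpy as np

def predictive_release(true_loc, predicted, threshold,
                       eps_test, eps_noise, budget):
    """Release `predicted` if a noisy test says it is close enough to the true
    location; otherwise release a freshly perturbed location. Returns the
    reported location and the remaining privacy budget."""
    error = float(np.linalg.norm(np.array(true_loc) - np.array(predicted)))
    noisy_error = error + np.random.laplace(scale=1.0 / eps_test)
    budget -= eps_test                     # the private test itself consumes budget
    if noisy_error <= threshold:
        return predicted, budget           # reuse the prediction: no fresh noise
    report = planar_laplace(true_loc[0], true_loc[1], eps_noise)
    return report, budget - eps_noise      # pay for a fresh noisy release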

To make the mechanism easier to integrate into location-based services, a privacy budget manager is also devised. The manager is given a target privacy level and utility, and configures the parameters of the mechanism to offer good utility subject to these constraints.

The evaluation was done on the GeoLife traces from MSR, which were processed through sub-sampling to simulate queries to an LBS.

Interestingly, a plug-in for Firefox and Chrome is available that implements the approach.

The paper “Quantifying the Effect of Co-location Information on Location Privacy”, presented by Alexandra-Mihaela Olteanu, discusses the important effect of co-location information on location privacy in mobile networks.

The work notes that co-location information is widespread: for example, users tag a number of people in pictures on social networks; face recognition is getting better; and devices record the other devices around them. Interestingly, the information an adversary may infer from this co-location information is not hidden by traditional location privacy mechanisms. In fact this is an instance of privacy technologies not doing a very good job at hiding information leaking from interactions between users. When correlations between users' activities are observable by an adversary, the adversary may combine the information from both to increase the accuracy of their inferences. In fact the actions of one user may have serious consequences for the privacy of another (a co-target).

This work attempts to quantify the degree of privacy lost from such combined inferences. It uses the location privacy model of Shokri et al. (IEEE S&P). Users' locations and traces are modeled using a Markov model, and the adversary observes co-locations. Tracking a single user is the traditional inference on this hidden Markov model. With co-locations the attack is more complex, since the constraints implied by co-locations need to be honored. This increases the information available to the adversary, but also increases the computational complexity of the attack. As a result the authors also propose a heuristic attack that ignores some of the information and constraints but tries to keep a certain set of observations consistent.

The evaluation was performed on the GeoLife (MSR Asia) dataset. Interestingly, co-location information can be inferred from such raw traces, which allows a number of scenarios to be simulated: for example, one may vary the fraction of co-location events observed by the adversary, or the location privacy mechanism used. The results show, unsurprisingly, that an adversary observing co-locations has a significant impact on privacy. They also show that observing a target with low privacy settings may lead to the compromise of a co-target with high privacy settings.

This is very nice work. As someone in the audience pointed out, some of the computational complexity may be tackled using sampling-based estimation of the posterior distribution that is otherwise intractable. This is probably a worthy space for follow-up research. Relaxing the Markov mobility assumption (which makes the posterior computations even more complex) is another avenue for future work.

(The first session of PETS 2014 dealt with privacy for mobile devices.)

The paper “Exploiting Delay Patterns for User IPs Identification in Cellular Networks” observes that a significant number of mobile operators (over 50% of the market in North America and over 30% in Europe) assign public and routable IP addresses to mobile devices. This opens the devices up to denial-of-service attacks, resource depletion, and so on. But how can an adversary find the IP address of a specific user? If the adversary can observe a large fraction of the network this may be easy. The paper shows that even a small entity — like a single user — may track down a specific user’s IP.

The tracking attack uses specificities of how mobile devices use the cellular network. It turns out that, to conserve battery, the network stack may be in different states (“idle”, “cell FACH” or “cell DCH”); most of the time a device is in the idle state. The authors notice that the push notifications of IM services allow delay patterns to be indirectly injected on a target device: to receive a notification the device moves to a high-energy reception / transmission state, and for a time window (of 10-15 sec) the latency to reach the device drops. An adversary may use this insight to identify the IP of the device associated with an IM identifier.

So the attack methodology goes as follows: the attacker sends an IM message, lowering the latency of the target device. They then scan a range of device IPs to detect whether the latency is compatible with the hypothesis that the device has just received a push notification; if not, the IP may be excluded. The attack is repeated a number of times to gain increasing certainty about the exact address of the target. Graphs suggest that after about 10-20 iterations the space of possible device IPs becomes significantly smaller, though for best results the repeated probes should be spread quite a few minutes apart. There is still some confusion between the target device and devices that always transmit; eliminating those further improves the accuracy of the attack.
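
A purely hypothetical sketch of this elimination loop is given below; the notification and latency-probing capabilities are stand-ins supplied by the caller, and the thresholds are invented for illustration:

# Hypothetical sketch: notify_target() and measure_rtt() stand in for the
# adversary's IM and latency-probing capabilities; thresholds are invented.
import time

def narrow_down(candidate_ips, notify_target, measure_rtt,
                rounds=15, fast_rtt=0.5, pause=300):
    """Keep only the IPs whose latency looks 'awake' just after each
    push notification is triggered on the target's IM account."""
    for _ in range(rounds):
        notify_target()                    # wakes the target's radio for ~10-15s
        survivors = set()
        for ip in candidate_ips:
            rtt = measure_rtt(ip)          # returns seconds, or None on timeout
            if rtt is not None and rtt < fast_rtt:
                survivors.add(ip)          # latency compatible with a notification
        candidate_ips = survivors
        time.sleep(pause)                  # spread probes minutes apart
    return candidate_ips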

Simple countermeasures involve either firewalling devices to make it difficult to reach them, or using IPv6, which makes the space of possible addresses to scan infeasibly large.

 
