The previous post, pointing fingers at papers making unreasonable assumptions about Distributed Hash Tables, created a bit of controversy on the Cambridge Security Group mailing lists. One of the most valid comments (made by Steven Murdoch over an Indian meal) was that no one knows how to secure systems against Sybil attacks. A few years ago Chris Lesniewski-Laas, M. Frans Kaashoek, Ross Anderson and I worked on using social links between peers to alleviate the Sybil attack in DHTs. The proposed solution was cumbersome, but the idea was clearly worth pursuing.

I am most glad to see that Chris Lesniewski-Laas has worked further on these ideas, and will be presenting a paper on the topic very soon:

A Sybil-proof one-hop DHT.
Chris Lesniewski-Laas. To appear. In Proceedings of the Workshop on Social Network Systems, Glasgow, Scotland, April 2008.

Abstract

“Decentralized systems, such as structured overlays, are subject to the Sybil attack, in which an adversary creates many false identities to increase its influence. This paper describes a one-hop distributed hash table which uses the social links between users to strongly resist the Sybil attack. The social network is assumed to be fast mixing, meaning that a random walk in the honest part of the network quickly approaches the uniform distribution. As in the related SybilLimit system, with a social network of n honest nodes and m honest edges, the protocol can tolerate up to o(n/log n) attack edges (social links from honest nodes to compromised nodes). The routing tables contain O(√m log m) entries per node and are constructed efficiently by a distributed protocol. This is the first sublinear solution to this problem. Preliminary simulation results are presented to demonstrate the approach’s effectiveness.”

In security engineering it is a known fact that few practical systems can claim unconditional security. Security is always reducible to a set of assumptions that are believed to hold.

In traditional cryptography these assumptions are quite clear: keys have to be kept safe; block ciphers and hash functions should be indistinguishable from random permutations and random functions; and public key primitives rely on the hardness of a small set of mathematical problems.

Traditional security engineering encourages us to think along similar lines: any secure system has to have a well-defined Trusted Computing Base (TCB) that functions according to its specification and ensures that the security properties of the overall system hold. In commodity operating systems it is the role of the TCB to enforce access control, key storage, memory protection, and privilege separation, to mention only a few. In operating systems with mandatory access control, the TCB is in charge of imposing the security policy on sometimes unwilling users. Often the correctness of the TCB cannot be shown mathematically, so sound engineering practices have to be employed to implement it: these include code reviews, safe languages, verification and, most importantly, simplicity and few lines of code.

Hidden under the mathematical assumptions, or the secure-TCB assumption, are people. People hold access to keys, and coalitions of them have the access and permissions to modify the TCB. Shamir's secret sharing scheme, despite being mathematically unconditionally secure, still relies on some fraction of the users holding shares to abide by a security policy and not divulge their shares. For a long time people were considered outside the system; now security engineering is starting to consider them explicitly.
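To make the quorum assumption concrete, here is a minimal sketch of Shamir's k-of-n scheme over a prime field; the prime and parameters are illustrative. Any k share-holders can reconstruct the secret, and fewer than k learn nothing, but the scheme still assumes a quorum of the known share-holders follows the policy:

```python
import random

P = 2**61 - 1  # an illustrative Mersenne prime, big enough for a toy secret

def share(secret, k, n):
    """Split `secret` into n points on a random degree-(k-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456789, k=3, n=5)
# Any 3 of the 5 shares suffice; any 2 reveal nothing about the secret.
assert reconstruct(shares[:3]) == 123456789
```

Note where the trust lives: not in the mathematics, which is unconditional, but in the assumption that fewer than k of the identified share-holders collude.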

So far so good: mathematics, trusted computing bases, or quorums of people are a sane foundation on which one can build secure systems.

Yet a worrying trend comes from some corners of systems research: researchers extend or build on complex underlying systems without fully understanding their security assumptions, and assume that the final system will be secure. A typical example is any security paper that makes use of a secure Distributed Hash Table (DHT). Some papers discuss how one could build such a beast, either making assumptions about admission control and detection of mis-routing, or trying to use social structure to alleviate routing problems and the Sybil attack. Those `secure' DHTs are fragile, make multiple assumptions about their operational environment, and impose restrictions and overheads on how the DHT can be used. It is not acceptable to use them as black-box replacements for insecure DHTs thinking that their security properties will magically make the overall design secure.

Even worse than not understanding the composition of security properties or assumptions is to make assumptions that are demonstrably false in the real world, for the sake of providing a security argument. To use again the example of DHTs, it has become a meme in the systems research community that it is acceptable to do admission and rate control, to limit the Sybil attack, based on IP addresses. The assumption is that IP addresses are a scarce resource, and that an adversary will find it hard to route streams from multiple IP addresses. This is simply not the case: powerful adversaries control large national firewalls and routers that they can use to pretend they are any machine they wish, and pathetic adversaries can buy botnet time on thousands of machines for little money. How have we allowed this bogus `security assumption' to appear in so many publications?
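The failure mode is easy to see in a toy model (the cap and addresses below are invented): a per-IP admission cap stops an honest user from registering twice, but places no limit at all on an adversary who controls a thousand distinct IPs:

```python
from collections import defaultdict

MAX_IDS_PER_IP = 1  # an illustrative admission-control cap
admitted = defaultdict(int)

def try_admit(ip):
    """Grant a new identity only if this IP is still under its cap."""
    if admitted[ip] < MAX_IDS_PER_IP:
        admitted[ip] += 1
        return True
    return False

# An honest user with one IP gets exactly one identity.
assert try_admit("198.51.100.7")
assert not try_admit("198.51.100.7")

# A botnet of 1000 machines brings 1000 distinct IPs, so the per-IP
# cap does not bound the adversary's total number of identities.
botnet = ["10.0.%d.%d" % (i // 256, i % 256) for i in range(1000)]
sybil_identities = sum(try_admit(ip) for ip in botnet)
```

The defence only constrains the honest user; the `sybil_identities` count grows linearly with the number of IPs the adversary can rent or spoof.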

Another meme is that selecting a quorum of nodes from an undefined population will increase security. This is an extreme form of the `kindness of strangers' assumption, and comes down to assuming that, given enough people, a proportion of them will be good. This might be true (even though sociologists will have to work hard to prove it) if the population is guaranteed to be drawn at random, or at least to be representative of the user base. But if a system can be joined by adversaries performing a Sybil attack, there is no clear basis for trusting 1000 nodes more than 10, or for thinking that within those 1000 nodes only 30% would be corrupt. This is a parody of the quorum-of-people assumption in Shamir's secret sharing scheme: in secret sharing you know who the people are, and assume that a quorum of them will be honest, given that you know their identity and have made a trust decision upon it!
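A back-of-the-envelope simulation (all numbers invented) shows why quorum size does not help here: if the adversary can inflate the population with Sybils, the expected corrupt fraction of a randomly sampled quorum simply tracks the population, however many nodes you sample:

```python
import random

random.seed(1)  # for reproducibility of this toy run

# Invented population: 1000 honest nodes drowned by 9000 Sybil identities.
population = ["honest"] * 1000 + ["sybil"] * 9000

fractions = {}
for quorum_size in (10, 100, 1000):
    quorum = random.sample(population, quorum_size)
    fractions[quorum_size] = quorum.count("sybil") / quorum_size

# Every quorum, large or small, is ~90% corrupt in expectation:
# sampling more nodes from a Sybil-ridden population buys nothing.
```

Growing the quorum only shrinks the variance around that 90%, which is exactly the wrong direction; the honest fraction is fixed by the population, not by the sample size.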