Sane security assumptions

25 March 2008

In security engineering it is well understood that few practical systems can claim unconditional security. Security always reduces to a set of assumptions that are believed to hold.

In traditional cryptography these assumptions are quite clear: keys have to be kept safe; block ciphers and hash functions should be indistinguishable from random permutations and random functions; and public key primitives rely on the hardness of a small set of mathematical problems.

Traditional security engineering encourages us to think along similar lines: any secure system has to have a well-defined Trusted Computing Base (TCB) that functions according to its specification and ensures that the security properties of the overall system hold. In commodity operating systems it is the role of the TCB to enforce access control, key storage, memory protection, and privileges, to mention only a few. In operating systems with mandatory access control the TCB is in charge of imposing the security policy, sometimes on unwilling users. Often the correctness of the TCB cannot be shown mathematically, and sound engineering practices have to be employed to implement it: code reviews, safe languages, verification, and, most importantly, simplicity and few lines of code.

Hidden under the mathematical assumptions, or the secure-TCB assumption, are people. People hold access to keys, and coalitions of them have the access and permissions needed to modify the TCB. Shamir's secret sharing scheme, despite being mathematically unconditionally secure, still relies on some fraction of the users holding shares to abide by the security policy and not divulge their shares. For a long time people were considered outside the system; security engineering is now starting to consider them explicitly.
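
To see how people hide under the mathematics, here is a minimal sketch of Shamir's scheme over a prime field (a toy illustration of mine; the prime, parameters, and function names are not from any particular library). The interpolation is information-theoretically secure, yet the comment on the last line is really an assumption about people:

```python
import random

P = 2**127 - 1  # a Mersenne prime; big enough for a toy secret

def split(secret, n, k):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return secret

shares = split(42, n=5, k=3)
assert reconstruct(shares[:3]) == 42   # any 3 shares suffice
assert reconstruct(shares[2:]) == 42
# Two shares reveal nothing about the secret. The real security assumption
# is that no 3 of the 5 people holding shares will pool them.
```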

So far so good: mathematics, trusted computing bases, or quorums of people are a sane foundation on which one can build secure systems.

Yet a worrying trend comes from some corners of systems research: researchers extend or build on top of complex underlying systems without fully understanding their security assumptions, and assume that the final system will be secure. A typical example is any security paper that makes use of a secure Distributed Hash Table (DHT). Some papers discuss how one could build such a beast, either making assumptions about admission control and the detection of mis-routing, or trying to use social structure to alleviate routing problems and the Sybil attack. Those 'secure' DHTs are fragile, make multiple assumptions about their operational environment, and impose restrictions and overheads on how the DHT can be used. It is not acceptable to use them as black-box replacements for insecure DHTs, thinking that their security properties will magically make the overall design secure.

Even worse than not understanding the composition of security properties or assumptions is to make assumptions that are demonstrably false in the real world, for the sake of providing a security argument. To use again the example of DHTs, it has become a meme in the systems research community that it is acceptable to do admission and rate control, to limit the Sybil attack, based on IP addresses. The assumption is that IP addresses are a scarce resource, and that an adversary will find it hard to route streams from multiple IP addresses. This is simply not the case: powerful adversaries control large national firewalls and routers that they can use to pretend they are any machine they wish, and pathetic adversaries can buy bot-net time on thousands of machines for little money. How have we allowed this bogus 'security assumption' to appear in so many publications?
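
A toy model makes the failure plain (my sketch; the policy and names are invented, not taken from any real DHT implementation). The admission rule "one identity per IP address" is never violated, and yet the Sybil attack succeeds:

```python
# Toy admission control keyed on source IP address.
admitted = {}

def admit(node_id, source_ip):
    """Admit a node only if its IP address has not been seen before."""
    if source_ip in admitted:
        return False            # the 'IP addresses are scarce' assumption
    admitted[source_ip] = node_id
    return True

assert admit("honest-node", "198.51.100.7")   # one user, one identity

# An adversary renting time on a 10,000-machine botnet never violates
# the policy: every bot really does have its own distinct address.
sybils = sum(
    admit(f"sybil-{i}", f"10.{i // 256}.{i % 256}.1")
    for i in range(10_000)
)
print(sybils)  # 10000 identities admitted; the Sybil attack succeeded
```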

Another meme is that selecting a quorum of nodes from an undefined population will increase security. This is an extreme form of the 'kindness of strangers' assumption, and comes down to assuming that, given enough people, a proportion of them will be good. This might be true (even though sociologists would have to work hard to prove it) if the population is guaranteed to be drawn at random, or at least to be representative of the user base. But if a system can be joined by adversaries performing a Sybil attack, it is not clear there is any basis for trusting 1000 nodes more than 10, or for thinking that within those 1000 nodes only 30% would be corrupt. This is a parody of the quorum-of-people assumption in Shamir's secret sharing scheme: in secret sharing you know who the people are, and assume that a quorum of them will be honest, given that you know their identity and have made a trust decision upon it!
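
A back-of-the-envelope calculation (mine, not from any cited paper) shows how brittle the quorum argument is even when the random-draw assumption is granted. If each quorum member is independently corrupt with probability f, the binomial tail gives the chance that more than 30% of the quorum is bad, and a Sybil attacker simply gets to choose f:

```python
from math import comb

def p_quorum_broken(n, f, threshold=0.3):
    """P(more than threshold*n members corrupt), each corrupt w.p. f."""
    t = int(threshold * n)
    return sum(comb(n, k) * f**k * (1 - f)**(n - k)
               for k in range(t + 1, n + 1))

for f in (0.10, 0.25, 0.30, 0.35, 0.50):
    print(f"f={f:.2f}:  n=10 -> {p_quorum_broken(10, f):.3f}   "
          f"n=1000 -> {p_quorum_broken(1000, f):.4f}")

# Below f = 0.30 a quorum of 1000 looks far safer than one of 10; at
# f = 0.35 it is almost surely broken. A Sybil attacker gets to choose f,
# so quorum size, by itself, guarantees nothing.
```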

One Response to “Sane security assumptions”

  1. really large attacks will invoke really large responses. hence a DDoS attack from a very big source will get the source disconnected (which mitigates Sybil attacks for load from one site), and a large attack from a large botnet on a single target will result in the target getting web-hosted by someone who can take it, or eventually in some of the botnet getting shut down, or both …

    real humans are not in the loop in the timeframe of good vs. bad behaviour of computers, except in terms of choosing what OS or virus protection to (re-)install, so i don't think sociology is relevant on this timescale. on the transaction timescale, you need to think in terms of the diversity of software systems available to judge how many machines might be subverted to a single cause, not how many humans might be persuaded to permit, on a per-target basis, a good or bad behaviour.

    the spatial and temporal stuff matters…
