What the CIA hack and leak teaches us about the bankruptcy of current “Cyber” doctrines

8 March 2017

Wikileaks just published a trove of documents resulting from a hack of the CIA’s Engineering Development Group, the part of the spying agency that is in charge of developing hacking tools. The documents seem genuine and catalog, among other things, a number of exploits against widely deployed commodity devices and systems, including Android, iPhone, OS X, Windows, and smart TVs. This hack, with appropriate background, teaches us a lesson or two about the direction of public policy related to “cyber” in the US and the UK.

Routine proliferation of weaponry and tactics

The CIA hack is in many ways extraordinary: it gave the attackers access to the source code of the agency’s hacking tools, an act of proliferation of attack technologies on a grand scale. In other ways it is mundane, in that it is neither the first, nor probably the last, hack or leak of catastrophic proportions to hit a US or UK government department in charge of offensive cyber operations.

This pattern of leaks of government attack technologies illustrates that, when it comes to cyber-weaponry, the risk of proliferation is not merely theoretical but very real. In fact, it seems to be happening all the time.

I find it particularly amusing, and those in charge of those agencies should probably find it embarrassing, that the NSA and GCHQ go around presenting themselves as national technical authorities in assurance: they provide advice to others on how not to get hacked; they keep asserting that they can be trusted to operate extremely dangerous spying infrastructures; and they handle extremely dangerous zero-day exploits in secret. Yet they seem to be routinely hacked and to have their secret documents leaked. Instead of chasing whistleblowers and journalists, policy makers should probably take note that there is no level of assurance high enough to secure cyber-weaponry, and it is certainly not to be found within those agencies.

In fact the risk of proliferation is at the very heart of cyber attack, and integral to it, even without hacking or leaking from inside government. Many of us quietly laughed at the bureaucratic nightmare described in the recent CIA leak: the difficulty of classifying cyber attack techniques while at the same time deploying them on target systems. As the press release summarizes:

To attack its targets, the CIA usually requires that its implants communicate with their control programs over the internet. If CIA implants, Command & Control and Listening Post software were classified, then CIA officers could be prosecuted or dismissed for violating rules that prohibit placing classified information onto the Internet. Consequently the CIA has secretly made most of its cyber spying/war code unclassified.

This illustrates very clearly a key dynamic in hacking: once a hacker uses an exploit against an adversary’s system, there is a very real risk that the exploit is captured by the monitoring and intrusion detection systems of the target, and then weaponized to hack other computers at low cost. This is very well established and researched, and such “honeypot” infrastructures have been used in the academic and commercial communities for some time to detect and study potentially new attacks. This is not the preserve of sophisticated defenders; the explanation of how honeypots work is on Wikipedia! The Flame malware, and Stuxnet before it, were in fact found in the wild.
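The mechanism is simple enough to sketch. Below is a minimal, illustrative honeypot in Python (the port number, record fields, and function name are my own choices for the sketch, not anything from the leaked documents): any payload sent to a listener that no legitimate service uses is, by construction, unsolicited attack traffic, and is logged so the defender can study — and potentially re-use — the exploit it carries.

```python
import socket
from datetime import datetime, timezone

def run_honeypot(host="0.0.0.0", port=2222, max_payload=4096, max_conns=None):
    """Listen on a port no legitimate service uses; anything that arrives
    is by definition unsolicited, so record it for later analysis."""
    captured = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        conns = 0
        while max_conns is None or conns < max_conns:
            client, addr = srv.accept()
            with client:
                client.settimeout(2.0)
                try:
                    payload = client.recv(max_payload)
                except socket.timeout:
                    payload = b""
            # Record the raw bytes: an exploit delivered here has, in
            # effect, been handed to the defender for study (or re-use).
            captured.append({
                "time": datetime.now(timezone.utc).isoformat(),
                "source": addr[0],
                "payload": payload,
            })
            conns += 1
    return captured
```

Real deployments (and network intrusion detection systems) are far more elaborate, emulating vulnerable services to keep the attacker talking, but the core point stands: the very act of using an exploit delivers a copy of it to anyone watching.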

In that respect cyber-war is not like war at all. The weapons you use will be turned against you immediately, and your effective use of weapons relies on your very own infrastructures being utterly vulnerable to them.

What “Cyber” doctrine?

The constant leaks and hacks, leading to proliferation of exploits and hacking tools from the heart of government as well as through operations, should deeply inform policy makers when making choices about “cyber” doctrines. First, it is probably time to ditch the awkward term “Cyber”.

“Cyber” conflates two distinct policy directions, one on attack and one on defense. On one side it encompasses offensive doctrine, operations, infrastructures and technologies: hacking, interception, introducing backdoors, weakening security standards, retaining data, and, at the harsher end, the blackmail, fraud, and social engineering needed to get access to keys and communications. On the other side, it covers assurance, or computer security as we have known it for a while: techniques to ensure that computer systems operate correctly, are available, and keep secrets according to policy. The conflation is dangerous, since maintaining capabilities for attack creates incentives to perform actions that are detrimental to communal defense, the only effective defense we have against sophisticated hacking.

In the case of the CIA hack this tension is clear: exploits were not shared with vendors to ensure they were fixed in a timely manner; instead they were weaponized and deployed, potentially exposing them to hostile parties; and ultimately they were leaked, putting users of those devices, including high-risk users in the UK, US and beyond, at risk. It is not clear whether the CIA or others have now disclosed those vulnerabilities to vendors; that is an interesting question for journalists to ask.

This is in sharp contrast with how the vulnerability disclosure process works on the assurance side of “Cyber”. Upon discovery, and before any risk of being exposed, professional security researchers are duty-bound to report security issues to vendors or the public. This ensures bugs are fixed in a timely manner, and no party gains an advantage from using them for attack. Disclosure can happen through “responsible” disclosure, full disclosure, bug bounties, on contract as penetration testers, or as in-house security reviews. All of these contribute to a broad high-assurance eco-system, with a clear understanding that disclosing bugs even in competitors’ products contributes to the security of all, as when Google helps Microsoft fix bugs.

Not only does government “Cyber” doctrine directly corrupt this practice, by hoarding security bugs and feeding an industry that does not contribute to collective computer security, but it also corrupts the process indirectly. It casts doubt on many avenues for reporting bugs and coordinating fixes, such as the national Computer Emergency Response Teams (CERTs) that are meant to act as single points of contact for responding to attacks. Since “Cyber” doctrines rely on the ability to attack, disclosing a new bug to a UK or US national or associated CERT may in fact see that bug used for attack, well before a fix is available.

Is it responsible to disclose a bug to CERT-UK? Will the bug go straight to GCHQ or the NSA to be used for attack? I do not know, but the conflation of attack and defense under the UK “Cyber” umbrella is the reason for this doubt. I would always prefer to work with IT vendors directly for this reason.

A focus on computer security

So it is time to go back to separating the two components of “Cyber”, and to make a clear distinction between computer offense and computer defense. Furthermore, the emphasis should always be on communal defense, as the only way to secure modern infrastructure, industry and life.

It is a fallacy to give too much credence to the agencies’ statements that offense is key to defense, including mass surveillance and capabilities for ubiquitous hacking. The main reason they support that doctrine is that their background is in attack, and thus this is what they have also applied to defense so far. Yet defense is not going very well for them.

In the same way as civil engineers need to know how bridges fall, and crime investigators know the modus operandi of criminals, good computer security experts know very well how hackers go about attacking systems. However, that does not mean that civil engineers rely on going around destroying bridges, or that police officers commit burglaries to keep up with the craft. In fact the skills, infrastructures, engineering, and techniques needed for computer defense are quite different from those needed for mass attack.

Does this preclude government agencies from hacking? Probably not: systems may always be misconfigured; patches may be applied late or never; physical access to machines usually allows installing backdoors. A policy that clearly states that vulnerabilities in commodity software will always go through a disclosure process would increase overall assurance and trust in computer systems, but would not preclude targeted “equipment interference” operations. It would simply make bulk surveillance and bulk equipment interference very difficult, and raise the cost of targeted operations. Bulk computer attack is at the heart of what the NSA and GCHQ do. Policy makers have sadly not quite realized this yet, nor the danger it poses to the security of all.

