On the morals of network research and beyond
20 August 2015
This post presents a quick opinion on a moral debate that seems to have taken on large proportions at this year’s SIGCOMM, the premier computer networking conference, related to the following paper:
Encore: Lightweight Measurement of Web Censorship with Cross-Origin Requests
by Sam Burnett (Georgia Tech) and Nick Feamster (Princeton).
The paper was accepted to be presented, along with a public review by John W. Byers (Boston University) that summarizes the paper very well, and then presents an account of the program committee discussions, primarily focused on research ethics.
In a nutshell, the paper proposes using unsuspecting users browsing a popular website as measurement relays to detect censorship. The website serves a page to the user’s browser — which may be in a censored jurisdiction — that actively probes potentially blocked content to establish whether it is blocked. The neat tricks used to side-step, and make use of, cross-domain restrictions and permissions may well have other applications.
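To make the mechanism concrete, here is a minimal sketch, in TypeScript, of the kind of cross-origin probe the paper describes. Everything here is illustrative: the URLs and function names are invented and Encore’s actual implementation differs, but the principle is the same. A page served by a third party loads a cross-origin resource and infers from the browser’s load events whether it is reachable.

```typescript
// Illustrative sketch only: the probe and collector URLs are hypothetical,
// not taken from the paper.
function probe(url: string, timeoutMs = 5000): Promise<boolean> {
  return new Promise((resolve) => {
    // Browsers allow cross-origin image loads without user consent; the
    // onload/onerror events leak whether the fetch succeeded, even though
    // the embedding page never gets to read the content itself.
    const img = new Image();
    const timer = setTimeout(() => resolve(false), timeoutMs);
    img.onload = () => { clearTimeout(timer); resolve(true); };
    img.onerror = () => { clearTimeout(timer); resolve(false); };
    img.src = url + "?t=" + Date.now(); // cache-bust so the network is measured
  });
}

async function measure(): Promise<void> {
  const reachable = await probe("https://potentially-blocked.example/favicon.ico");
  // Report the result with another cross-origin request; since the response
  // is never read, no CORS permission is needed from the collector either.
  new Image().src = "https://collector.example/report?ok=" + (reachable ? "1" : "0");
}

measure();
```

Note that onerror can also fire because the resource exists but is not an image, so a real measurement has to be considerably more careful; the point here is only that the user neither initiates nor can easily observe any of this.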
Most of the public review reflects an intense discussion within the program committee (according to insiders) about the ethical implications of fielding such a system: roughly two thirds of its single page are devoted to this topic. The substantive worry is that, were such a system deployed, the probes might be intercepted and interpreted as a willful attempt to bypass censorship, and lead to harm (in “a regime where due process for those seen as requesting censored content may not exist”). Apparently this worry nearly led to the paper being rejected. The review goes on to disavow this use case — on behalf of the reviewers — and even to call such measurements unethical.
I find this rather lengthy, unprecedented and quite forceful statement a bit ironic, not to say somewhat short-sighted or even hypocritical. Here is why.
While a lot of space is devoted to establishing the morality of this hypothetical use case — and the responsibility of researchers — there seems to be little moral consideration of a number of prerequisites that allow and enable harm to occur. These are equally related to networking research:
- The first networking topic, which is not value free, is the ability of sites to request resources across domains — with little or no user control or consent. While this is useful for building APIs, it is also the primary mechanism by which mass-scale user tracking is implemented on the web today (a minimal sketch of this mechanism follows this list). The whole on-line advertising industry is built on this technology, which has in effect created the largest privacy-invasion infrastructure in human history. This is not a hypothetical use case — it is happening in your browser right now. Besides the orgy of meta-data collected by third-party advertisers — with little consent — the Snowden documents reveal that the same identifiers are used as selectors by the NSA. This is the very mechanism this research uses, yet there is no word about its use today; its morality, or the morality of the engineers who built it, is a non-topic. No questions asked. Facebook and Google are Silver sponsors of SIGCOMM.
- Second, there is a worry that observing a user accessing a blocked resource may lead to harm. This observation requires the existence of both network monitoring and blocking equipment — equipment that would then indicate access has occurred. This very real equipment (its design, its manufacture, but also the quality of the advice it provides, on which decisions are taken) is not value free. It is currently deployed in a number of countries. Network engineers are involved in researching it, building it and maintaining it, yet despite being a necessary component for the harm to occur, there is no mention of these moral agents. Ironically, the discussion centers instead on a research paper trying to uncover this phenomenon — rendering those agents, and their moral responsibility, invisible. Huawei is a Gold sponsor of the event.
- Third, there are the values embedded in the architecture of the internet itself, and its (lack of a) confidentiality architecture. A key enabler of the harm is that, by design, the internet transports information in the clear and leaks the meta-data of the communicating end-parties. This enables censorship, and it also enables the surveillance necessary to put people in harm’s way when they access a resource. The same is true of protocols like GSM, which were designed to be interception-friendly. Neither the internet nor GSM is a natural phenomenon: they were designed and built by engineers and researchers — many of them luminaries of SIGCOMM. There is no discussion in the public review of the responsibility involved in fielding such “standard” technologies in a context where their lack of privacy may get people killed. Cisco and Ericsson are Platinum and Gold sponsors.
- Finally, the moral agents (presumably agents of the state) who may bring the actual harm upon the user are also invisible. There is no discussion in the public review of the fact that they would be acting on poor network-forensics evidence — evidence that is trivial to refute. Let alone that they would likely be acting without due process and outside the rule of law.
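For concreteness, here is the sketch of the tracking mechanism promised in the first point above, again in TypeScript; the tracker domain and query parameters are invented for illustration.

```typescript
// A publisher page embeds this third-party snippet. There is no consent
// dialog and nothing for the user to click; tracker.example is hypothetical.
//
// The browser automatically attaches any cookie previously set by
// tracker.example (a third-party cookie carrying a unique identifier) to this
// cross-origin request, letting the tracker link this page view to the same
// user's visits on every other site embedding the same pixel.
new Image().src =
  "https://tracker.example/hit" +
  "?page=" + encodeURIComponent(location.href) +
  "&ref=" + encodeURIComponent(document.referrer);
```

The same browser behaviour that makes this one-line tracking pixel work is what the censorship measurement leans on; the public review weighs the morality of the latter but not of the former.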
It seems to me that a long chain of moral actors is involved, and each of them is or was required for the harm, presented as the responsibility of the researchers alone, to materialize. By choosing to focus on the researchers, the other moral agents are made invisible — including those that profit massively, both monetarily and politically, from the architectures that enable this harm, as well as those doing the actual harm.
There is a difference between a medical researcher administering a drug that may — through a natural and amoral process — lead to harm, and the case being considered here. At least two other moral actors have to misinterpret information and act, in ways I would consider immoral, for harm to occur in the networking research setting: the surveillance-box manufacturers and the state representatives. The full architecture of the web and the internet enables it. I would argue that the bulk of the responsibility — and the spotlight of moral outrage — should be on these actors. Placing it squarely on the researchers makes a mockery of the discussion of the ethical implications of our technological artefacts.
Discussions of ethics have become very popular in computer science lately — and to some extent I am glad of this. However, I think we should dispel three key fallacies.
The first is that things we do not like (some may brand them “immoral”) happen because others do not think about the moral implications of their actions. In fact, it is entirely possible that they do, and decide nonetheless to act in a manner we do not like. This could be out of conviction: those who build the surveillance equipment, those who argue against strong encryption, and also those who do the torture and the killing (the harm) may have entirely self-righteous ways of justifying their actions to themselves and others. Others may simply be making a good buck — and there are plenty of examples of this in the links above.
The second fallacy is that ethics, and research ethics more specifically, comes down to a “common sense” variant of “do no harm”, and that is that. In fact ethics, as a philosophical discipline, is extremely deep, and there are plenty of entirely legitimate ways to argue that doing harm is perfectly fine. Had the authors of the paper been a bit more sophisticated in their philosophy, they could, for example, have made reference to the “doctrine of double effect”, or to the free will of those who would bring the actual harm to users, and therefore to their moral responsibility. It seems that a key immoral aspect of this work was that the authors forgot to write that confusing section.
Finally, in conversations about research ethics, we should dispel the myth that morality equals legality. The public review mentions “informed consent”, but this is in fact an extremely difficult notion — and legalistically it has been used to justify terrible things. The data-protection variant of informed consent allows large internet companies, and telcos, to basically scoop up most users’ data on the strength of some small print in lengthy terms and conditions. It should probably be our responsibility to highlight the immorality of this state of affairs, before writing public reviews about the immorality of a hypothetical censorship-detection system.
Thus, I would argue, if one is to make an ethical point about the values and risks of a technology, one has to make it in the larger context of how the technology is fielded and used, the politics around it, who has the power, who makes the money, who does the torturing and the killing, and why. Technology lives within a big moral picture that a research community has a responsibility to comment on. Focusing moral attention on the microcosm of a specific hypothetical use case — just because it is the one closest to our research community — misses the point, and silently perpetuates a terrible state of moral affairs.