On the morals of network research and beyond

20 August 2015

This post presents a quick opinion on a moral debate that seems to have taken on large proportions at this year’s SIGCOMM, the premier computer networking conference, relating to the following paper:

Encore: Lightweight Measurement of Web Censorship with Cross-Origin Requests
by Sam Burnett (Georgia Tech) and Nick Feamster (Princeton).

The paper was accepted for presentation, along with a public review by John W. Byers (Boston) that summarizes the paper very well and then presents an account of the program committee discussions, primarily focused on research ethics.

In a nutshell, the paper proposes using unsuspecting users browsing a popular website as measurement relays to detect censorship. The website sends a page to the user’s browser — which may be in a censored jurisdiction — that actively probes potentially blocked content to establish whether it is blocked. The neat tricks used to side-step cross-domain restrictions and permissions may have other applications.
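
To make the mechanism concrete, here is a minimal sketch of the kind of cross-origin probe described: the page quietly includes a resource from a potentially filtered origin and infers reachability from whether the load succeeds. This is an illustration only; the collector URL, function names and timeout are assumptions of mine rather than the authors’ implementation, and a real deployment would need far more careful inference.

```typescript
// Hypothetical sketch of an Encore-style cross-origin probe (not the authors' code).
// The browser is asked to fetch an image from a potentially censored origin; the
// load/error events (plus a timeout) are used to guess whether it was reachable.

type ProbeResult = { url: string; reachable: boolean; elapsedMs: number };

function probe(url: string, timeoutMs = 10_000): Promise<ProbeResult> {
  return new Promise((resolve) => {
    const start = performance.now();
    const img = new Image();
    const finish = (reachable: boolean) =>
      resolve({ url, reachable, elapsedMs: performance.now() - start });

    // A timer distinguishes a black-holed request from a slow but successful load.
    const timer = setTimeout(() => finish(false), timeoutMs);
    img.onload = () => { clearTimeout(timer); finish(true); };
    // onerror also fires when the resource loads but is not a valid image, so this
    // is a crude signal; a real system has to choose its target URLs carefully.
    img.onerror = () => { clearTimeout(timer); finish(false); };
    img.src = url; // the cross-origin request fires here, with no consent dialog
  });
}

// Report the observation to a measurement collector (placeholder URL).
async function probeAndReport(targetUrl: string): Promise<void> {
  const result = await probe(targetUrl);
  navigator.sendBeacon("https://collector.example.org/submit", JSON.stringify(result));
}
```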

Most of the public review reflects an intense discussion within the program committee (according to insiders) about the ethical implications of fielding such a system (two thirds of its single page is devoted to this topic). The substantive worry is that, if such a system were deployed, the probes might be intercepted and interpreted as a willful attempt to bypass censorship, and lead to harm (in “a regime where due process for those seen as requesting censored content may not exist”). Apparently this worry nearly led to the paper being rejected. The review goes on to disavow this use case — on behalf of the reviewers — and even to call such measurements unethical.

I find this rather lengthy, unprecedented and quite forceful statement a bit ironic, not to say somewhat short-sighted or even hypocritical. Here is why.

While a lot of space is devoted to establishing the morality of this hypothetical use case — and the responsibility of researchers — there seems to be little moral consideration of a number of prerequisites that allow and enable harm to occur. These are equally related to networking research:

  • The first networking topic, which is not value-free, is the ability of sites to request resources across domains — with little or no user control or consent. While this is useful for building APIs, it is also the primary mechanism by which mass-scale user tracking is implemented on the web today (see the sketch after this list). The whole on-line advertising industry is built on this technology, which has in effect created the largest privacy-invasion infrastructure in human history. This is not a hypothetical use case — it is already happening in your browser right now. Besides the orgy of meta-data collected by third-party advertisers — with little consent — the Snowden documents reveal how the same identifiers are used as selectors by the NSA. This is the mechanism that this research uses — yet there is no word about its use today; its morality, or the morality of the engineers who built it, is a non-topic. No questions asked. Facebook and Google are Silver sponsors of SIGCOMM.
  • Second, there is a worry that observing a user accessing a blocked resource may lead to harm. This observation necessitates the existence of both network monitoring and blocking equipment — which would then indicate that access has occurred. This very real equipment, its design and manufacture, and the quality of the advice it provides on which decisions are taken, are not value-free. It is currently deployed in a number of countries. Network engineers are involved in researching it, building it and maintaining it, yet despite it being a necessary component for harm to occur, there is no mention of those moral agents. Ironically, the discussion centers instead on a research paper trying to uncover this phenomenon — rendering those agents, and their moral responsibility, invisible. Huawei is a Gold sponsor of the event.
  • Third, there are the values embedded in the architecture of the internet itself and its (lack of a) confidentiality architecture. A key enabler of harm is that, by design, the internet transports information in the clear and leaks the meta-data of the communicating end-parties. This enables censorship, and it also enables the surveillance necessary to put people in harm’s way if they access a resource. The same is true of protocols like GSM, which were designed to be interception-friendly. Neither the internet nor GSM is a natural phenomenon: they were designed and built by engineers and researchers — many of them luminaries of SIGCOMM. There is no discussion in the public review of the responsibility of fielding such “standard” technologies in contexts where their lack of privacy may get people killed. Cisco and Ericsson are Platinum and Gold sponsors.
  • Finally, the moral agents (presumably agents of the state) that may bring the actual harm upon the user are also invisible. There is no discussion in the public review of the fact that they would be acting on poor network-forensics evidence that is trivial to refute, let alone that they would likely be acting without due process and outside the rule of law.
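
For readers unfamiliar with the cross-origin mechanism mentioned in the first bullet, the sketch below shows the classic third-party tracking pattern. It is illustrative only: the domain names are placeholders, and this is the generic technique rather than any particular company’s code.

```typescript
// Hypothetical sketch of third-party tracking via cross-origin requests.
// Embedding any resource from a tracker's domain makes the browser contact that
// domain and, by default, attach the tracker's own cookies, so a stable identifier
// follows the user across every site that embeds the pixel. Domains are placeholders.

function embedTrackingPixel(trackerOrigin: string, pageUrl: string): void {
  const pixel = new Image(1, 1);
  // The visited page is reported in the query string (and via the Referer header);
  // the tracker's cookie, sent automatically with the request, identifies the user.
  pixel.src = `${trackerOrigin}/pixel.gif?page=${encodeURIComponent(pageUrl)}`;
  pixel.style.display = "none";
  document.body.appendChild(pixel);
}

// A publisher page would include something like:
embedTrackingPixel("https://ads.example.net", location.href);
```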

It seems to me that a long chain of moral actors is involved, and is or was required, for the harm, presented as being the responsibility of the researchers alone, to materialize. By choosing to focus on the researchers, the other moral agents are made invisible — including those that profit massively, both monetarily and politically, from the architectures that enable this harm, as well as those doing the actual harm.

There is a difference between a medical researcher administering a drug that may — through a natural and amoral process — lead to harm, and the case being considered here. In the networking-research setting, at least two other moral actors have to misinterpret information and act, in ways I would consider immoral, for harm to occur: the surveillance-box manufacturers and the state representatives. The full architecture of the web and the internet enables it. I would argue that the bulk of the responsibility — and the spotlight of moral outrage — should fall on these actors. Placing it squarely on the researchers makes a mockery of the discussion of the ethical implications of our technological artefacts.

Discussions of ethics have become very popular in computer science lately — and to some extent I am glad about this. However, I think we should dispel three key fallacies.

The first is that things we do not like (which some may brand “immoral”) happen because others do not think of the moral implications of their actions. In fact it is entirely possible that they do think of them, and decide to act in a manner we do not like nonetheless. This could be out of conviction: those who built the surveillance equipment, those who argue against strong encryption, and also those who do the torture and the killing (the harm), may have entirely self-righteous ways of justifying their actions to themselves and others. Others may simply be making a good buck — and there are plenty of examples of this in the links above.

The second fallacy is that ethics, and research ethics more specifically, comes down to a “common sense” variant of “do no harm” — and that is that. In fact Ethics, as a philosophical discipline, is extremely deep, and there are plenty of entirely legitimate ways to argue that doing harm is perfectly fine. If the authors of the paper had been a bit more sophisticated in their philosophy they could, for example, have made reference to the “doctrine of double effect”, or to the free will of those who would bring the actual harm to users, and therefore their moral responsibility. It seems that a key immoral aspect of this work was that the authors forgot to write that confusing section.

Finally, in conversations about research ethics we should dispel the myth that morality equals legality. The public review mentions “informed consent”, but in fact this is an extremely difficult notion — and, legalistically, it has been used to justify terrible things. The data-protection variant of informed consent allows large internet companies, and telcos, to scoop up most users’ data on the strength of some small print in lengthy terms and conditions. In fact it should probably be our responsibility to highlight the immorality of this state of affairs before writing public reviews about the immorality of a hypothetical censorship-detection system.

Thus, I would argue, if one is to make an ethical point about the values and risks of technology, one has to make it in the larger context of how the technology is fielded and used, the politics around it, who has power, who makes the money, who does the torturing and the killing, and why. Technology lives within a big moral picture that a research community has a responsibility to comment on. Focusing moral attention on the microcosm of a specific hypothetical use case — just because it is the closest to our research community — misses the point, and silently perpetuates a terrible state of moral affairs.


11 Responses to “On the morals of network research and beyond”

  1. marconicrowcroft said

    in defence of the Sigcomm community (and by extension the Internet), i would point out we’ve had repeated papers on how to do anonymity and privacy, as well as repeated detailed analyses of privacy problems (including a very good paper this year on each of those). but you’re right that some of the furore about this particular paper is somewhat mis-aligned

    my take on this (with 3 other PC members who absolutely objected to this paper being published) is technical, not ethical (although it is therefore ethical by implication:)

    first off, the claim in the paper that you can do large-scale measurement of censorship without the need for lots of social science first is completely unsupported by their approach, which
    a) requires you to guess what IP prefixes/URLs will be blocked in the first place (e.g. is NYTimes blocked or just the Chinese-language edition, or is a youtube account blocked or all of youtube – both actual examples which changed over time)
    b) assumes that countries/regimes running censorship for purely political/cultural reasons (i.e. not just legal reasons like child porn in the UK) won’t just block the script’s access to gatech.edu (or just all of .edu) when they read this paper… this renders the approach useless, as it’s a single point of failure and easily detected, and most people running bluecoat filters (Syria, Pakistan) already log stuff and analyse it… see next point

    the paper betrays a complete lack of awareness of prior work on measuring censorship – e.g. work on what is done in Syria (by UCL) and Pakistan (by Cambridge and Berkeley) showed how to do a large-scale experiment and not risk anything – the scale is way, way bigger than the paper under discussion, and it was reported in IMC, which heavily overlaps the Sigcomm community

    if they were really then to go back through the last 10 years they’d notice for example Richard Clayton’s work on measuring the effectiveness of the UK’s police/ISP cooperation in blocking child porn (to save citizens from themselves and from the law) – that work included some very ingenious experimental design – if they’d read it, they’d ask themselves “why is this being done this way” and realise that there are legal (as well as ethical) consequences of their approach, before they embarked on what I regard as such a naive design…

    but yes, there are a lot of other big problems with the internet’s basic design, but also a lot of Sigcomm people (despite sponsorship or perhaps sometimes because of it, also from Facebook and Google:) have been trying to fix that for a long time, so i don’t think it’s a criticism of the critique of this particular paper that the internet has other flaws – the main point I’ve made above is that the approach in the paper doesn’t do what it says on the tin…

  2. Hi Jon,

    Regarding the first point, the paper doesn’t claim that large-scale censorship measurement can be done independently of social science research. In fact, it makes the opposite point: researchers at Berkman, Citizen Lab, Princeton, and elsewhere have asked questions that require data that does not presently exist and that we have no alternative methods of gathering at this point.

    Regarding point 1(b) about the script being blocked: The paper explains ways around this, including having the origin site proxy the script. (The same question was raised in question-and-answer at the talk yesterday; I believe you were there.)

    Regarding the points about related work: you mention country-specific studies. Those studies—conducted for limited periods of time in single countries (and, in one case, a single ISP)—are precisely the motivation for a tool like Encore. We know anecdotally that censorship varies across time and across countries; we also know from social scientists and anecdotes that it varies across regions within a country, and even within ISPs in the same country. Encore has collected measurements from more than 170 countries continuously, over the course of a year. The studies you mention are necessary and good, but they are complementary to Encore—they are neither widespread, nor continuous. The tools you mention above don’t even come close.

    I do find it disappointing, however, that after a day-long workshop on network ethics—which I helped organize, and which included both myself and the other Encore paper author—you somehow couldn’t raise your concerns either with us personally or in the workshop itself, or at any time during the week of the conference. We would have been happy to address your concerns, many of which appear to be based on simple misunderstandings.

    -Nick

  3. Hi Jon,

    As I re-read the original comment above, another disturbing aspect occurred to me: Absolutely none of your concerns were represented in the paper reviews, a summary of PC discussion, or the public review. I believe this is because you were not one of the reviewers of the paper. Your comments (and suggestions that others shared your concerns) are utterly inconsistent with the six reviews that we received. (To their credit, all reviewers had deep, thoughtful, and generally positive commentary on the technical aspects of the work, which I’d be happy to share with anyone who would like to read them.) Based on this experience, I now wonder whether the other three program committee members you refer to in fact read or reviewed the paper, either; their concerns are not present in the reviews.

    We certainly need to have discussions on ethics—and, as George points out, we might want to have this discussion in the broader context of the values that are reflected by both designers and stakeholders. Perhaps as we consider ethical dissonance in the community, we should add to that list the “ghost reviewers” who have not reviewed or read papers but argue for their rejection in program committee meetings without conveying their opinions to authors in a written review. In this case, thankfully, the chairs appear to have done a good job curtailing the effects of this phenomenon, but anyone who has served on a few program committees knows that ghost reviewers exist. It’s bad enough that good research can be rejected (censored?) based on the opinions of a few gatekeepers. It’s even worse when the reasons behind those decisions are not transparent to authors. Kudos to the program committee chairs for curtailing this kind of behavior.

    -Nick

    • marconicrowcroft said

      Nick,

      1. correct- i was not a reviewer.
      2. the intro to the presentation made the claim about general applicability – it is that which I am referring to – if it isn’t in the paper, maybe the talk could have made that clear.
      3. the blocking isn’t of WHERE the script runs – it is of the actual script. to get around that, you’d not only have to host it in lots of different places, but also make it polymorphic.

      The reason I didn’t raise these points in the NetEthics workshop is that they aren’t ethics points, they are technical, and they are therefore not very relevant to that workshop. I did engage (positively) with you at the Netethics workshop about the need for dialogue between researchers and their IRBs (and the PC and the IRBs), if you recall, so I don’t think you have any need to be disappointed :)

      • Hi Jon,

        Fair enough. It is completely true that Encore is not “general”, in the sense that there are certain things it cannot do (e.g., it basically gets only a bit of information, and the “how” is purely inference).

        I did quite appreciate your comments about the “penultimate” paper at the Netethics workshop, but I was surprised the other comments didn’t come up in our hallway/pub discussions, given your apparent strong feelings!

        Let’s talk more when you hopefully come visit Princeton. You’re up to some interesting related work and there are probably complementary ways to leverage the datasets. One positive outcome from the workshop/conference may be that people are less afraid to touch the dataset we do have—and, if it proves useful to the social scientists (and others), that can only help all of us explore the next steps.

        Thanks!
        -Nick

  4. Andrei said

    Focussing on just one discussion point:
    “unsuspecting users around the world must be enlisted to download oft-censored URLs without their informed consent. These requests could potentially result in severe harm; for example, when the user lives in a regime where due process for those seen as requesting censored content may not exist”

    1) Threat model “Censor at all costs”: a regime is not targeting an individual, but merely sees who is requesting censored content and goes after them. So, to build robust proof that it was not the user but Encore that requested the content, the authors should ensure that the fact that it was Encore, and not the user, is a) easily recognizable and b) provable. I suspect this is not that hard. Some kind of cookie with content signed by Encore? (See the sketch after this comment.)
    2) Focussing on the “when the user lives in a regime where due process for those seen as requesting censored content may not exist” part. Threat model “It’s YOU!”: you live in a regime which hates you, and it fabricates a case against you and puts you in jail. Should the Encore designers worry that it was actually because of their system that the guy went down? Seeing what’s going on in some places, the answer for me is a firm no. They would have got him anyway; for something else.
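
A minimal sketch of the kind of signed marker suggested in point 1 above, assuming an HMAC over the probe URL; the names and construction are illustrative, and a fieldable design would need a publicly verifiable signature rather than a shared secret for the proof to convince a third party.

```typescript
// Hypothetical sketch of tagging Encore-initiated requests with a verifiable marker,
// so that a log entry can later be shown to come from the measurement script rather
// than from a deliberate user action. HMAC keeps the sketch short; a real design
// would likely use a public-key signature so third parties can verify it without
// access to the secret key.

async function tagProbeUrl(url: string, rawKey: Uint8Array): Promise<string> {
  const key = await crypto.subtle.importKey(
    "raw", rawKey, { name: "HMAC", hash: "SHA-256" }, false, ["sign"]
  );
  const mac = await crypto.subtle.sign("HMAC", key, new TextEncoder().encode(url));
  const hex = Array.from(new Uint8Array(mac))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
  // The marker travels in the query string, since cross-origin <img> probes cannot
  // set custom headers or cookies for the target site.
  return `${url}${url.includes("?") ? "&" : "?"}probe_sig=${hex}`;
}
```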

    • marconicrowcroft said

      Andrei, your threat model is a useful addition to the ethical debate, but I think it needs refining – the point is that the type 1 threat is rational, whereas type 2 is byzantine, when in fact terror regimes can be a mix of rational and arbitrary. Thus really your two points are ends of a spectrum. This makes risk assessment (part of the IRB’s job) very hard. But it is definitely useful to clarify that it isn’t just about good and bad countries/jurisdictions.

      A further refinement to worry about is whether the country actually has the capability to follow up the idea that an Encore-type system is “innocent”, and so the user whose browser got detected trying to access forbidden sites was ok – firewalling sites is a lot less work than logging access attempts (obviously). If you look at the prior work on censorship with (illegally provided) bluecoat kit, we see that even though the kit can do both, there’s a mix of deployments.

      If we look at Russia and China, where most people aren’t actually arrested and tortured for looking at censored sites, and where censorship itself is much more complex (I can point you at literature if you haven’t already read it), the picture is more nuanced, because how much of a risk it is depends on the user’s visibility in society.

      By the way, you might want to worry about the legal consequences of this too, as contributing to someone’s risk (even if the primary cause is someone else) can be considered in a court to be …oh wait, I am not a lawyer nor do I etc etc

  5. meileaben said

    So because others are doing greater evils, it’s fine for researchers to potentially put unsuspecting end-users in harm’s way?
    I don’t buy that argument.

  6. marconicrowcroft said

    by the way, there’s extensive global mapping of censorship in RL and online, and it would be quite easy to use that to construct a guess list to bootstrap Encore – if it wasn’t dangerous for some people in some places – see for example the Index on Censorship project, which has been running for ages – e.g. https://mappingmediafreedom.org/

    • Jon,

      Good thought on bootstrapping. Also, see here:
      https://github.com/citizenlab/test-lists/tree/master/lists

      We’d love to figure out ways to use lists like this. We had started with a pruned version of the Herdict list (which I believe we’ve also publicly linked), but the kind folks at Oxford Internet Institute made us think twice about doing this, so we’re being very conservative right now about what to measure.

      If we can answer the ethical questions surrounding the tool and figure out ways to make it safe, adding these kinds of lists might be very valuable.

      My thought on this is that someone should take the data we currently collect on a limited number of sites (YouTube, Twitter, Facebook) and show something interesting (e.g., that blocking varies by region, ISP, and time within a country). Once we start demonstrating that there’s actual value in the data, hopefully there will be some desire to think about ways we can bootstrap more data. But, to push this discussion further, I think we need to demonstrate value… otherwise, the discussion will center around “any nonzero risk is bad, since the benefit is unclear”.

      -Nick

  7. […] two things of note here 1/ Heidi Howard won the Student Research Competition 2/ There was an interesting debate around netethics, which George Danezis, et al, blogged […]
