Meta-Ethical and Psychological Observations Relating to Speculation About the Scope of NSA Snooping

There are two competing theories about whether what I will call the “tech companies” (Google, Apple, Facebook, etc.) participate in or assist with NSA spying on Internet users.

Theory #1 is that they do not: While the companies respond to court orders, if the NSA is accessing user data without some sort of formal, particularized request that a tech company has to affirmatively respond to (i.e., not through some automated system), then it is doing so upstream, by tapping into ISPs or transit networks.

Theory #2 is that the tech companies are dissembling if not outright lying, and that the NSA has installed its own equipment somewhere “inside” their networks (whether logically or physically).

The purpose of this post is not to add to the speculation about which theory is more likely. I’ve been using Twitter for that, and as I write this new revelations are coming fast. I lean towards the dissembling theory: I think that there is enough wiggle room in the tech companies’ denials that they are not “lying,” but that they are being deceptive.

Motivated reasoning?

My first question, however, is why exactly I lean in that direction. Obviously the evidence is not conclusive on either side. (Or wasn’t when I made my commitment.) I think a motivating factor is: What happens if I’m wrong? If I’m wrong that the tech companies are “cooperating” in some manner with the NSA, then I will look like I just had a healthily skeptical attitude. Maybe a bit paranoid, but isn’t paranoia justified when dealing with the NSA? On the other hand, what if I believed the tech companies and was proven wrong on that–would I look gullible?

Mind you, I don’t think that, if indeed the tech companies are dissembling, the people who took them at face value were suckers–not in the least. This situation is too complex for that, and the data just are not there. Tony Judt often quoted the French communist Pierre Courtade, who said to a rival who ended up being correct in his analysis of the political situation, “You and your kind were wrong to be right; we were right to be wrong.” Granted, Pierre Courtade was a Stalinist who was actually wrong to be wrong. But the point stands! A person is right or wrong in his analysis based on the evidence. A person who makes the right call based on flimsy evidence is just lucky, not prescient.

But it’s somehow different when you yourself are the person who might be considered gullible. I will go to great lengths to avoid looking foolish, even when I would not judge a person doing the “foolish” thing I so avoid to be foolish herself. This is why cynicism is the psychologically safe position. It’s probably why so many people will go out of their way to publicly condemn a politician but never publicly support one. Etc.

Consequentialism versus deontology in news analysis

My second question stems from a brief Twitter exchange I had with Jerry Brito. Which world would I rather live in: One where the NSA has almost unfettered access to Internet communications, but only because it has a few strategic upstream taps? Or one where the NSA has access to the data of a few companies, because it has somehow installed gear “inside” their networks, perhaps with their knowledge, if not their cooperation? Jerry’s view is that the latter would be better because that would narrow the scope of the NSA’s reach. That is, it would certainly be bad if the NSA had access to Google and Facebook traffic, but worse if it had access to all that traffic plus everything else, which an upstream tap with DPI could give it.

My initial view was that I would rather live in a world where the tech companies found a way to refuse to cooperate and where tapping was upstream. Jerry’s point at first changed my mind, but as I think about it I’m back to my old position. I think this is because I’m not a consequentialist.

I think I would rather live in a world where “evil” was less widespread (the “upstream” theory), but where there might be greater harms, than in a world where there was more widespread cooperation with evil, even if the harms were lesser.

I could attempt to justify this in terms of some kind of meta-consequentialism: Maybe my moral theory will get better results in the long run, making it justifiable consequentially. But frankly, I have never had much attraction to utilitarianism of any kind (though I am attracted to pragmatism), and I’m not sure I can explain why rationally. Baby, I was born a deontologist.

I don’t have any conclusions, but I do have an observation. In complicated issues with unclear evidence, cognitive biases and mental shortcuts take center stage. It’s hard to think of any that are clearer than a rush to be cynical and skeptical to avoid embarrassment (or, contrariwise, a rush to defend a “good guy”). Similarly, a person’s moral impulses or meta-ethical framework might actually have policy implications, which is kind of weird.