Because everyone must have an opinion!

First of all, let me just say that as a non-US citizen, I try not to comment in public on US politics. It's kind of hard, because it's a bit like trying not to comment on Roman politics in the first century AD, but there it is. Therefore, while the whole PRISM debacle is what prompted this post, what I have to say is not specific to PRISM.

Back in the day, three blogs ago and lost in the mists of Internet time, there was Total Information Awareness. This was a DARPA project from ten years ago, which ended up being defunded by Congress after a massive public outcry, not least about its totally creepy name. Basically the idea was to sift through all communications, or as many as was feasible, looking for patterns that indicated terrorist activity. The problem people had with TIA is much the same as the problem they have with PRISM: the idea that the government will look through everything, and then decide what is important.

On the one hand, this is actually a positive development. No, wait, let me finish! The old way of doing surveillance, and of Information Awareness that was less than Total, was to let humans access all those communications. This method has problems with scale; even enthusiastic adopters of surveillance like East Germany and North Korea only succeeded to the degree they did because East Germany didn't have the internet and North Korea keeps it out. Human access also guarantees abuse, if only because humans can't be told to ignore or forget information. The agent listening to the take from the microphones set up to catch subversive planning can't help also hearing intimate details of the subjects' lives that are not relevant to any investigation.

An automated system is preferable, then, since it doesn't "listen" the way a human does, and discards any data that do not match its patterns. Privacy is actually invaded less by an automated system than by human agents, given the same inputs.
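To make that distinction concrete, here is a minimal sketch of what such a filter might look like. The watch patterns and the scan function are entirely hypothetical, invented for illustration, but they show the point: anything that doesn't match is simply discarded, never stored or overheard.

```python
import re

# Hypothetical watch patterns, made up for illustration; no real system's rules.
WATCH_PATTERNS = [re.compile(p, re.IGNORECASE)
                  for p in (r"\bfertiliser\b", r"\bball[- ]bearings\b")]

def scan(messages):
    """Return only the messages that match a watch pattern.

    Unlike a human listener, the filter retains nothing about the messages
    it rejects: they are never stored, remembered, or seen by anyone.
    """
    return [m for m in messages if any(p.search(m) for p in WATCH_PATTERNS)]

intercepts = [
    "ordered 500 kg of fertiliser and a crate of ball-bearings",
    "running late, tell nobody I skipped the meeting",
]
print(scan(intercepts))  # only the first message surfaces; the second is discarded
```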

That last clause is kind of important, though. Once you have the automated system set up, it is no longer constrained by scale. Governments around the world already have huge datacenters of their own, and could also take advantage of public cloud resources at a pinch, so such a system is guaranteed to expand, rapidly and endlessly, unless actively checked. A system that just looks for correlation clusters around terrorist organisations, subversive literature, and bulk purchases of fertiliser and ball-bearings will quickly be expanded to look for people behind on their student debt or parking tickets. Think this is an exaggerated slippery-slope argument? The US (sorry) Department of Education conducts SWAT raids over unpaid loans.

As with all tools, the problem is the uses that the tool might be put to. Let's say you trust the current government absolutely not to do anything remotely shady, so you approve all these powers. Then next election, the Other Guys get in. What might they get up to? Are you sure you want them to have this sort of power available?

It is already very difficult to avoid interacting with the government or breaking any laws. Today, I briefly drove at over twice the speed limit. Put like that it sounds terrible, doesn't it? But what actually happened was that the speed limit suddenly dropped by more than half. This is a very familiar route for me, I could see clear road ahead, and it was a lovely sunny day, so instead of jamming on my brakes right at the sign I exercised my judgment and braked more gradually. A human policeman, unless in a very bad mood, would have had nothing much to say about this. A black box in my car, though, might have revoked my licence before I had finished braking, since for those few seconds I was exceeding the limit by more than 40 km/h.

This is the zero-tolerance future of automated surveillance and enforcement. Laws and policies designed to be applied and enforced with judgment and common sense will show their weaknesses when they are applied by unthinking, unfeeling machines. And I haven't even gone into the translation from fuzzy human language to deterministic computer language. The only solution would be to require lawmakers to submit a reference implementation of their law, which would have the advantage of allowing it to be debugged against test cases in silico instead of in the real world, on actual human beings. Massively slowing down the production of new laws would be merely a fortunate side effect.
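For the sake of argument, here is what a reference implementation of my speeding anecdote might look like. The 40 km/h revocation margin and the function are hypothetical, a sketch rather than any real statute, but they show how test cases could expose a law's edge cases before it is ever applied to real drivers.

```python
REVOCATION_MARGIN_KMH = 40  # hypothetical threshold, borrowed from the anecdote above

def licence_revoked(speed_kmh: float, limit_kmh: float) -> bool:
    """The statute as a black box would apply it: no context, no judgment."""
    return speed_kmh - limit_kmh > REVOCATION_MARGIN_KMH

# "Debugging" the law in silico: the second test case exposes exactly the
# edge from my drive -- a sudden limit drop makes a reasonable driver an
# offender for the few seconds it takes to brake.
assert not licence_revoked(speed_kmh=100, limit_kmh=100)  # cruising legally
assert licence_revoked(speed_kmh=100, limit_kmh=50)       # limit just halved: instant revocation
```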

To recap: as usual, the weakest link is the human, not the machine. Systems like PRISM are probably inevitable and may even be desirable, but they need some VERY tight safeguards, which to date have not been in evidence. The problem, of course, is that discussing such systems in public risks disclosing information about how to evade them, but as we have seen in infosec, security by obscurity doesn't work nearly as well as full disclosure. If instead of feeling Big Brother watching over them, citizens felt that they and their government were working together to ensure common security, all of us would feel much happier about working to strengthen and improve such systems. Wouldn't you rather have guys like Bruce Schneier inside the tent, as the saying goes?