Sunday, November 23, 2014

Is this the right server?

It's nice to see someone else reach the following conclusion:

"For those familiar with SSH, you should realize that public key pinning is nearly identical to SSH's StrictHostKeyChecking option. SSH had it right the entire time, and the rest of the world is beginning to realize the virtues of directly identifying a host or service by its public key."

Verifying a TLS certificate via the CA hierarchy is better than nothing, but it's not really all that reassuring. Approximately, what it tells you is that there is a chain of certification leading back to one or more root authorities, which, for reasons we all try not to think about too much, are granted the ultimate authority on the legitimacy of web sites. I say "approximately" because fancier TLS verifiers can and do incorporate additional information.

The root authorities are too numerous to really have faith in, and they have been compromised in the past. In general, they and their delegates have little incentive to be careful about what they are certifying, because the entities they certify are also their source of income.

You can get better reliability in key verification if you use information based on the interactions of the actual participants, rather than on any form of third-party security database. Let me describe three examples of that.


Pin the key

For many applications, a remotely installed application only needs to communicate with a handful of servers back at a central site you control. In such a case, it works well to pin the public keys of those servers.

The page advocates embedding the public key directly in the application. This is an extremely reliable way of obtaining the correct key. You can embed the key in the app's binary as part of your build system, and then ship the whole bundle over the web, the app store, or however else you are transmitting it to the platform it will run on. Given such a high level of reliability, there is little benefit from pulling in the CA hierarchy.

As linked above, you can implement pinning today. It appears to be tricky manual work, though, rather than something that is built into the framework. As well, you don't get to ignore the CA hierarchy by doing this sort of thing. So long as you use standard SSL libraries, you still have to make sure that your key validates in the standard ways required by SSL.
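Just to make the idea concrete, here is a rough Python sketch of what a pinned connection could look like. The pin value, host name, and helper name are made up for illustration; a real application would compute the pin from its actual server key at build time, and the standard SSL validation still runs first.

     import hashlib
     import socket
     import ssl

     from cryptography import x509
     from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

     # Hypothetical pin: SHA-256 of the server's DER-encoded public key
     # (SubjectPublicKeyInfo), embedded in the application at build time.
     PINNED_SPKI_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

     def connect_with_pin(host, port=443):
         # Do the normal TLS handshake first; the usual CA checks still apply.
         ctx = ssl.create_default_context()
         with socket.create_connection((host, port)) as sock:
             with ctx.wrap_socket(sock, server_hostname=host) as tls:
                 der_cert = tls.getpeercert(binary_form=True)
                 # Extract the server's public key and compare it to the pin.
                 cert = x509.load_der_x509_certificate(der_cert)
                 spki = cert.public_key().public_bytes(
                     Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
                 if hashlib.sha256(spki).hexdigest() != PINNED_SPKI_SHA256:
                     raise ssl.SSLError("server key does not match the embedded pin")
                 # At this point the connection is both CA-valid and pinned.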


Associate keys with links

The Y property deserves wider recognition, given how important hyperlinks are in today's world. Put simply, if someone gives you a hyperlink, and you follow that hyperlink, you want to reliably arrive at the same destination that the sender wanted you to get to. That is not what today's URLs give you.

The key to achieving this property is that whenever you transmit a URL, you also transmit a hash of the expected host key. There are many ways to do this, including the ones described at the above hyperlink (assuming you see the same site I am looking at as I write this!). Just to give one very simple example, you could use URLs of the following form:

     https://hash-ABC123.foo.bar/sub/dir/foo.html

This particular example is interesting for being backward compatible with software that doesn't know what the hash means.
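To illustrate, here is a small Python sketch of how a client might check a URL of that hypothetical form. The "hash-" label convention and the use of a SHA-256 digest prefix are assumptions made for this example, not an established standard.

     import hashlib
     from urllib.parse import urlparse

     def expected_key_hash(url):
         # Hypothetical convention: the first DNS label is "hash-<digest>",
         # naming the server key the sender expects you to reach.
         host = urlparse(url).hostname or ""
         first_label = host.split(".")[0]
         if first_label.startswith("hash-"):
             return first_label[len("hash-"):].lower()
         return None  # ordinary URL carrying no expectation about the key

     def key_matches(url, server_spki_der):
         # Compare the server's actual public key against the hash in the URL.
         expected = expected_key_hash(url)
         if expected is None:
             return True  # nothing to check; fall back to other verification
         actual = hashlib.sha256(server_spki_der).hexdigest()
         return actual.startswith(expected)

     # Example: key_matches("https://hash-abc123.foo.bar/sub/dir/foo.html", spki_bytes)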

I don't fully know why this problem has been left to languish. Part of it is probably that people rest too easy on the bad assumption that the CA hierarchy has us covered. There's a funny mental bias where, if we know nothing about a subject and we see smart people working on it, the more optimistic of us just assume that it works well. Another part of the answer is that the core protocols of the world-wide web are implemented in many disparate code bases; SSH, by contrast, benefited from having an authoritative implementation of both the client and the server, especially in its early days.

As things stand, you can implement "YURLs" for your own software, but they won't work as desired in standard web browsers. Even with custom software, they will only work among organizations that use the same YURL scheme. This approach looks workable to me, but it requires growing the protocols and adopting them in the major browsers.


Repeat visits

One last source of useful information is the user's own previous interactions with a given site. Whenever you visit a site, it's worth caching the key for future reference. If you visit the "same" site again but the key has changed, then you should be extremely suspicious. Either the previous site was wrong, or the new one is. You don't know which is the impostor, but you know something is wrong.

Think how nice it would be if you tried to log into your bank account and the browser said, "This is a site you've never seen before. Proceed?"

You can get that already if you use pet names, which have been implemented as an experimental browser extension. It would be great if web browsers incorporated functionality like this, for example turning the URL bar and browser frame yellow when a site's certificate is one they have never seen before. Each browser can add this sort of functionality independently, as a matter of implementation quality.

In your own software, you can implement key memory using the same techniques as for key pinning, as described above.
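As a rough illustration, a minimal "key memory" might look like the following Python sketch, essentially a tiny analogue of SSH's known_hosts file. The file location and fingerprint format are placeholders for this example.

     import json
     import os

     # Where previously seen keys are remembered, analogous to SSH's known_hosts.
     STORE = os.path.expanduser("~/.example-known-keys.json")

     def check_and_remember(host, key_fingerprint):
         known = {}
         if os.path.exists(STORE):
             with open(STORE) as f:
                 known = json.load(f)
         if host not in known:
             print(f"{host}: first visit; remembering its key.")
         elif known[host] != key_fingerprint:
             # The key changed: either the old key or the new one is wrong.
             raise RuntimeError(f"{host}: key has changed since the last visit!")
         known[host] = key_fingerprint
         with open(STORE, "w") as f:
             json.dump(known, f)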


Key rotation

Any real cryptography system needs to deal with key revocation and with upgrading to new keys. I have intentionally left these topics out to keep the discussion simple, but I do believe they can be worked into the above systems. It's important to have a way to sign an official certificate upgrade, so that browsers can correlate new certificates with old ones during a graceful phase-in period. It's also important to have some kind of channel for revoking a certificate, in case one is compromised.

For web applications and for mobile phone applications, you can implement key rotation by forcing the application to upgrade itself. Include the new keys in the newly upgraded version.
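As a sketch of the signing idea, the old key can endorse its replacement, and clients that already trust the old key can then accept the new one. The example below uses Ed25519 via the Python cryptography package purely for illustration; nothing here prescribes a particular algorithm or message format.

     from cryptography.exceptions import InvalidSignature
     from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
     from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

     # Server side: the old key signs the new public key, producing an
     # "upgrade certificate" that ties the two keys together.
     old_priv = Ed25519PrivateKey.generate()
     new_priv = Ed25519PrivateKey.generate()
     new_pub_bytes = new_priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
     upgrade_sig = old_priv.sign(new_pub_bytes)

     # Client side: it already trusts old_pub from a previous visit or a pin,
     # and accepts the new key only if the old key endorsed it.
     old_pub = old_priv.public_key()
     try:
         old_pub.verify(upgrade_sig, new_pub_bytes)
         print("new key accepted: endorsed by the old key")
     except InvalidSignature:
         print("new key rejected: no valid endorsement from the old key")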

Thursday, November 20, 2014

FCC inches away from neutrality

The FCC’s latest proposal for network neutrality rules creates space for broadband carriers to offer “paid prioritization” services.[11] While the sale of such prioritization has been characterized as a stark and simple sorting into “fast” and “slow” traffic lanes,[12] the offering is somewhat more subtle: a paid prioritization service allows broadband carriers to charge content providers for priority when allocating the network’s shared resources, including the potentially scarce bandwidth over the last-mile connection between the Internet and an individual broadband subscriber. Such allocation has historically been determined by detached—or “neutral”—algorithms. The Commission’s newly proposed rules, however, would allow carriers to subject this allocation to a content provider’s ability and willingness to pay.

That's from a note in the Stanford Law Review a few months ago. I think this evolution in the FCC's approach will benefit the public.

It seems important to consider realistic developments of the Internet. Here's a thought experiment I've used for a long time, and one that seems to be happening in practice. Try to imagine what goes wrong if a site like YouTube or Netflix pays, with its own money, to install some extra network infrastructure in your neighborhood, but only allows its own packets to go across that infrastructure. Doing so is a flagrant violation of network neutrality, because packets from one site will get to you faster than packets from another site. Yet, I can't see the harm. It seems like a helpful development, and just the sort of thing that might get squashed by an overly idealistic commitment to neutrality.

As a follow-on question, what changes if instead of Netflix building the infrastructure itself, it pays Comcast to do it? From the consumer's point of view it's the same as before, only now the companies in question are probably saving money. Thus, it's even better for the general public, yet it's an even more flagrant violation of network neutrality. In this scenario, Netflix is straight-up paying for better access.

It seems that the FCC now agrees with that general reasoning. They not only support content delivery networks in general, but now they are going to allow generic ISPs to provide their own prioritized access to sites that pay a higher price for it.

I believe "neutrality" is not the best precise goal to go for. Rather, it's better to think about a more general notion of anti-trust.