Showing posts with label address crunch. Show all posts

Saturday, March 2, 2024

The case for IPv4.1

In 2003, Dan Bernstein wrote about the IPv6 mess, focusing primarily on the lack of a viable migration plan. Avery Pennarun wrote about the problem again in 2011, also focusing on the lack of a viable migration plan. Now another decade has passed, for a total of around 30 years on this project.

With the benefit of 20/20 hindsight, we now know that the IPv6 plan was never a very good one. If IPv6 had been accurately estimated to take 30+ years of migration, it would not have achieved the buy-in that it did, and most likely the designers would have kept working on the design. The individuals involved are all top of class from what I can tell, but sometimes good people produce bad results, especially in a group context. I'm reminded, in all of this, of the Ada programming language and the unusual way it was designed. I'm even more reminded of a certain saying about committees: "none of us is as dumb as all of us".

Big migrations do not always go this way. The World Wide Web only needed nine years, if we count from Tim Berners-Lee's memo to the launch of the Google search engine. Under the right conditions, the whole Internet can upgrade in less than ten years.

The difference in these projects is easy to understand. Avery describes it as follows:

In short, any IPv6 transition plan involves everyone having an IPv4 address, right up until everyone has an IPv6 address, at which point we can start dropping IPv4, which means IPv6 will start being useful. This is a classic chicken-and-egg problem, and it's unsolvable by brute force; it needs some kind of non-obvious insight. djb apparently hadn't seen any such insight by 2002, and I haven't seen much new since then.

Here in 2024, it's not really too late to start on a backup plan. It's anyone's guess whether a more modest IPv4.1 proposal would finish faster than IPv6, but we can all be sure of one thing. Our chances are better if we have two dogs in the race.

How to do it

Here's the general idea of IPv4.1, just to show what I mean. It's been posted about before, so consider this to be my explanation rather than anything fundamentally new.

The core change for IPv4.1 is that the IP packet header needs room for a larger address. The packet format can be the one from the SIPP proposal, but with version "7" instead of "6", since 6 is now taken. When an address is written down, for example in a router's configuration UI, these 8-byte addresses can be written the same way we are used to, just with 8 dot-separated numbers instead of 4. Existing addresses can be zero-padded on the left, e.g. 0.0.0.0.127.0.0.1.
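
To make the notation concrete, here is a minimal sketch of parsing and formatting these 8-byte addresses. The function names (format_addr, parse_addr) are my own for illustration; nothing here comes from an actual specification.

```python
# Hypothetical helpers for the 8-byte dotted notation described above.
def format_addr(n: int) -> str:
    """Render a 64-bit address as eight dot-separated decimal bytes."""
    return ".".join(str((n >> (8 * i)) & 0xFF) for i in range(7, -1, -1))

def parse_addr(s: str) -> int:
    """Parse dotted notation; a legacy 4-byte address is zero-padded on the left."""
    parts = [int(p) for p in s.split(".")]
    if len(parts) == 4:                      # legacy IPv4 address
        parts = [0, 0, 0, 0] + parts
    if len(parts) != 8 or any(not 0 <= p <= 255 for p in parts):
        raise ValueError(f"bad address: {s}")
    n = 0
    for p in parts:
        n = (n << 8) | p
    return n

print(format_addr(parse_addr("127.0.0.1")))  # → 0.0.0.0.127.0.0.1
```

Note that the legacy address and its zero-padded form parse to the same number, which is the whole point of the left-padding convention.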

Whenever two machines talk to each other, they use the existing IPv4 packet format if they both have 4-byte addresses. If either of them has an 8-byte address, then they switch to the IPv4.1 format. In this case, though, the packets still mean the same thing they always did! Part of the key to achieving a large migration is not to saddle it with additional migrations.
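
The selection rule is simple enough to state in a few lines of code. This is only a sketch of the rule as described above, with addresses treated as integers; the names fits_in_v4 and wire_format are illustrative, not from any spec.

```python
def fits_in_v4(addr: int) -> bool:
    """True when the address is a legacy 4-byte (32-bit) address."""
    return addr < 2**32

def wire_format(src: int, dst: int) -> str:
    """Both ends have 4-byte addresses: keep the plain IPv4 format.
    Either end needs 8 bytes: switch to the extended IPv4.1 format."""
    return "IPv4" if fits_in_v4(src) and fits_in_v4(dst) else "IPv4.1"

print(wire_format(0x7F000001, 0x08080808))    # two legacy addresses → IPv4
print(wire_format(0x7F000001, 2**32))         # one extended address → IPv4.1
```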

8-byte addresses should be sufficient for this problem. The reason that IPv6 addresses use 16 bytes is that IPv6 plans for the machines on people's private internal networks to avoid reusing an address from anyone else's private network. This property will not be pursued for IPv4.1, and it would not do anything useful if it were, because per the previous paragraph, all the packets in IPv4.1 are supposed to mean the same thing they did in IPv4.

Rollout

During the initial rollout of IPv4.1, many machines will not yet be able to talk to each other with the extended addresses. This will be much like the case with IPv6 right now.

From there, it will be a race. Which side will move faster is anyone's guess, but IPv4.1 has some big advantages. It doesn't take any configuration-file changes, and it doesn't require designing IP blocks. It definitely doesn't require rethinking link-local addresses, updates to how DHCP works, or any other fundamental aspects of networking. All it takes for IPv4.1 to win is for software providers to add it to their latest versions, and for the major nodes on the Internet to upgrade their software some time over the next 5-10 years. Major Internet nodes already have to upgrade their software once in a while, due to security and compliance requirements.

We don't have to know for sure that IPv4.1 will win the race before we can make a decision about trying. We can simply observe that the problem is important, and we can observe that our chances are better with two ways to win.

Thursday, January 31, 2013

The "magic moment" for IPv6

The Internet has undergone many large changes in the protocols it uses. A few examples are: the use of MIME email, the replacement of Gopher by HTTP, and the use of gzip compression within HTTP. In all three of these examples, the designers of the protocol upgrades were careful to provide a transition plan. In two out of the three examples (sorry, Gopher), the old protocol is still practical to use today, if you can live with its limitations.

Things are going differently for IPv6. In thinking about why, I like Dan Bernstein's description of a "magic moment" for IPv6. It goes like this:

The magic moment for IPv6 will be the moment when people can start relying on public IPv6 addresses as replacements for public IPv4 addresses. That's the moment when the Internet will no longer be threatened by the IPv4 address crunch.

Note that Dan focuses on the address crunch. Despite claims to the contrary, I believe most people are interested in IPv6 for its very large address space. While there are other cool things in IPv6, such as built-in encryption and simplified fragmentation, they are not enough that people would continue to lobby for IPv6 after all these years. The address crunch is where it's at.

While I like Dan's concept of a magic moment, I think the above quote asks for too much. There are some easier magic moments for individual kinds of nodes on the Internet, and some might well happen before others. Let me focus on two particular kinds of Internet nodes: public web sites and home Internet users.

How close is the magic moment for web sites? Well, web servers can discard their IPv4 addresses just as soon as the bulk of the people connecting to them all have IPv6 connectivity. I do not know how to gather data on that, but as one data point, I have good networking hardware but cannot personally connect to IPv6 sites. My reason is both mundane and common: I am behind a Linksys NATing router, and that router does not support IPv6. Even if it did, it does not support any sort of tunneling that would allow my local computer to connect to an IPv6-only web server. To the extent people are using plain old Linksys routers, we are a long way away from the magic moment for web servers.

How about for home users? Well, it's the other way around for home users: home users can switch once the majority of public web sites have an IPv6 address. This status is easier to gather data on. I just looked up the top ten web sites (according to Alexa's Top 500 Web Sites) and checked them with a publicly available IPv6 validation site (http://ipv6-test.com/validate.php). Of the top ten web sites, only four can be reached from an IPv6-only client: Google, Facebook, YouTube, and Wikipedia. The other six still require IPv4: Yahoo, Baidu, Live.com, Amazon, QQ.com, and Twitter. As things stand, we are also a long way from when home users can switch to IPv6-only.
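
A quick way to run this kind of spot check yourself is to ask DNS whether a site publishes any IPv6 (AAAA) address at all, which is a necessary condition for an IPv6-only client to reach it. This is only a rough sketch using Python's standard socket module; a site can publish an AAAA record and still serve poorly over IPv6, so treat the answer as a first approximation.

```python
import socket

def has_ipv6_address(host: str) -> bool:
    """True if DNS publishes an IPv6 (AAAA) record for the host, i.e. an
    IPv6-only client could at least resolve it. Requires live DNS."""
    try:
        return bool(socket.getaddrinfo(host, 443, socket.AF_INET6))
    except socket.gaierror:
        return False

# Example usage (results depend on the live DNS of the day):
# for site in ("google.com", "twitter.com"):
#     print(site, has_ipv6_address(site))
```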

Overall, this was a cursory analysis, but I think these "magic moments" are a helpful framework for thinking about the IPv6 changeover. Unfortunately, this framework currently indicates that we are nowhere close.

Saturday, December 29, 2012

Does IPv6 mean the end of NAT?

I frequently encounter a casual mention that, with the larger address space in IPv6, Network Address Translation (NAT)--a mainstay of wireless routers everywhere--will go away. I don't think so. There are numerous reasons to embrace path-based routing, and I believe the anti-NAT folks are myopically focusing on just one of them.

As background, what a NAT router does is multiplex multiple private IP addresses behind a single public IP address. From outside the subnet, it looks like the NAT router is a single machine. From inside the subnet, there are a number of machines, each with its own IP address. The NAT router allows communication between the inside and outside worlds by swizzling IP addresses and ports as connections go through the router. That's why it is a "network address translator" -- it translates between public IPs and private IPs.
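
The swizzling amounts to keeping a translation table in the router. Here is a toy sketch of that table, with invented names throughout; real NAT implementations also track protocol, expire idle entries, and rewrite checksums, all of which this omits.

```python
import itertools

class Nat:
    """Toy NAT table: maps (private_ip, private_port) to a public port on
    the router's single public IP, and back again for reply packets."""
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.out = {}                        # (priv_ip, priv_port) -> pub_port
        self.back = {}                       # pub_port -> (priv_ip, priv_port)
        self.ports = itertools.count(10000)  # next free public port

    def outbound(self, priv_ip: str, priv_port: int) -> tuple:
        """Rewrite an outgoing connection's source to the public address."""
        key = (priv_ip, priv_port)
        if key not in self.out:
            pub_port = next(self.ports)
            self.out[key] = pub_port
            self.back[pub_port] = key
        return (self.public_ip, self.out[key])

    def inbound(self, pub_port: int) -> tuple:
        """Rewrite a reply's destination back to the private address.
        A KeyError here models an unsolicited packet being dropped."""
        return self.back[pub_port]

nat = Nat("203.0.113.7")
print(nat.outbound("192.168.1.10", 5555))  # → ('203.0.113.7', 10000)
print(nat.inbound(10000))                  # → ('192.168.1.10', 5555)
```

Note how the table is populated only by outbound traffic, which is exactly why unsolicited inbound connections have nowhere to go -- a property relevant to the security discussion below.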

My first encounter with NAT was to connect multiple machines to a residential ISP. It was either a cable company or a phone company; I forget which. The ISP in question wanted to charge extra for each device connected within the residential network. That is, if you connect two computers, you should pay more than if you connect one computer. I felt, and still feel, that this is a poor business arrangement. The ISP should concern itself with where I impose costs on it, which is via bandwidth. If I take a print server from one big box and move it onto its own smaller computer, then I need a new IP address, but that shouldn't matter at all to the ISP. By using NAT--in my case, Linux's "masquerading" support--the ISP doesn't even know.

This example broadens to a concern one could call privacy. What an organization does within its own network is its own business. Its communication with the outside world should be through pre-agreed protocols that, to the extent feasible, do not divulge decisions that are internal to the organization. It shouldn't matter to the general public whether each resident has their own machine, or whether they are sharing, or whether the residents have all bought iPads to augment their other devices.

For larger organizations, privacy leads to security. If you want to break into an organization's computer infrastructure, one of the first things you want to do is to feel out the topology of the network. Unless you use NAT at the boundary between your organization's network and the general internet, you are exposing your internal network topology to the world. You are giving an attacker an unnecessary leg up.

You could also view these concerns from the point of view of modularity. The public network protocol of an organization is an interface. The internal decisions within the organization are an implementation. If you want everything to hook up reliably, then components should depend on interfaces, not implementations.

Given these concerns, I see no reason to expect NAT to go away, even given an Internet with a larger address space. It's just sensible network design. Moreover, I wish that the IETF would put more effort into direct support for NAT. In particular, the NAT of today is unnecessarily weak when it comes to computers behind different NATing routers making direct connections with each other.

It is an understatement to say that not everyone agrees with me. Vint Cerf gave an interview earlier this year where he repeatedly expressed disdain for NAT.

"But people had not run out of IPv4 and NAT boxes [network address translation lets multiple devices share a single IP address] were around (ugh), so the delay is understandable but inexcusable."

Here we see what I presume is Cerf's main viewpoint on NAT: it's an ugly mechanism that is mainly used to avoid address exhaustion.

"One of the benefits of IPv6 is a more direct architecture that's not obfuscated by the address-sharing of network address translation (NAT). How will that change the Internet? And how seriously should we take security concerns of those who like to have that NAT as a layer of defense?"

"Machine to machine [communication] will be facilitated by IPv6. Security is important; NAT is not a security measure in any real sense. Strong, end-to-end authentication and encryption are needed. Two-factor passwords also ([which use] one-time passwords)."

I respectfully disagree with the comment about security. I suspect his point of view is that you can just as well use firewall rules to block incoming connections. Speaking as someone who has set up multiple sets of firewall rules, I can attest that they are fiddly and error prone. You get a much more reliable guarantee against incoming connections if you use a NAT router.

In parting, let me note a comment in the same interview:

"Might it have been possible to engineer some better forwards compatibility into IPv4 or better backwards compatibility into IPv6 to make this transition easier?"

"We might have used an option field in IPv4 to achieve the desired effect, but at the time options were slow to process, and in any case we would have to touch the code in every host to get the option to be processed... Every IPv4 and IPv6 packet can have fields in the packet that are optional -- but that carry additional information (e.g. for security)... We concluded (perhaps wrongly) that if we were going to touch every host anyway we should design an efficient new protocol that could be executed as the mainline code rather than options."

It is not too late.