Tuesday, September 2, 2008

Internetworking Principles

End-to-end Arguments in System Design

Saltzer, Reed, and Clark work out the principle that the most meaningful functionality in networking, and in many other systems, should be placed at the "ends," rather than distributed throughout the "middle" or "low-level" parts of the system. This has a number of very interesting corollaries which follow immediately from the argument; some of them are laid out in the paper.

One design decision that follows immediately from this argument is the semantics of IP packets: packets are delivered zero or more times, in any order. The decision to settle on these best-effort delivery semantics has allowed the protocols layered on top to be very widely applicable. One reason for this seems to be that these semantics reflect reality. This is true in the sense that it is "easy" to build a contention-based protocol like Ethernet, or anything using wireless, which follows these guidelines using nothing more complicated than backoff and retry, whereas attempting to achieve any of the other properties the authors mention, like FIFO delivery or duplicate suppression, inside the network almost inevitably results in reimplementing TCP. Furthermore, if those assumptions are baked into the networking fabric, it becomes much more difficult to handle broadcast domains and multicast communication patterns in the network.
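To make the division of labor concrete, here is a toy sketch (not any real protocol; the names `unreliable_send` and `send_reliably` are mine) of a link that delivers each packet zero or more times, with the end hosts recovering via sequence numbers, duplicate suppression, and retransmission with exponential backoff:

```python
import random

rng = random.Random(42)

def unreliable_send(inbox, pkt, loss=0.4, dup=0.3):
    """Best-effort link: a packet may arrive zero, one, or two times."""
    if rng.random() < loss:
        return                 # dropped: zero deliveries
    inbox.append(pkt)
    if rng.random() < dup:
        inbox.append(pkt)      # duplicated: more than one delivery

def send_reliably(message):
    """End-host reliability built on the best-effort link: sequence numbers
    suppress duplicates, retransmission with backoff recovers from loss."""
    inbox, received = [], {}
    for seq, char in enumerate(message):
        backoff = 1            # abstract time units
        while seq not in received:
            unreliable_send(inbox, (seq, char))
            # receiver drains the link, ignoring duplicate sequence numbers
            for s, c in inbox:
                received.setdefault(s, c)
            inbox.clear()
            backoff *= 2       # wait longer before each retransmission
    return "".join(received[i] for i in sorted(received))

assert send_reliably("hello") == "hello"
```

The point of the sketch is that the link layer stays trivial; all of the ordering and deduplication machinery lives at the ends.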

A great section of the paper is the part where they discuss the problem of "identifying the endpoints." One of their principal arguments against message acknowledgments is that application-layer acks are almost always what is desired, not message acknowledgments. An interesting way the paper has aged is the emergence of middleboxes all over networks conducting application-layer stateful packet inspection. The way this splits "end to end" into "end-middle-end" is interesting. I think that as far as the data path is concerned, the end-to-end principle still applies, since what the middlebox interposes is policy control and perhaps routing based on application criteria. In most cases, these boxes are still transparent with respect to the data the endpoints receive.
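The distinction between a transport ack and an application ack is easy to illustrate. In this hypothetical receiver (the function name and reply format are mine, not from the paper), the acknowledgment is only sent after the data is durably committed; a TCP-level ack would have fired as soon as the bytes reached the socket buffer, which tells the sender nothing about whether the application actually did its job:

```python
import os
import tempfile

def handle_message(payload, storage_dir):
    """Receiver that acknowledges only after the data is durably stored.
    This is the application-layer ack the end-to-end argument says senders
    actually want, as opposed to a transport-level delivery ack."""
    path = os.path.join(storage_dir, "msg.dat")
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())   # really on disk, not just in an OS buffer
    os.replace(tmp, path)      # atomic rename: the commit point
    return {"status": "ack", "bytes": len(payload)}   # application ack

with tempfile.TemporaryDirectory() as d:
    reply = handle_message(b"important record", d)
    assert reply["status"] == "ack"
```

If the machine crashes before the `os.replace`, no ack was sent and the sender retries; the acknowledgment is tied to the outcome the application cares about, not to packet delivery.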


The Design Philosophy of the Darpa Internet Protocols

What's fascinating about this paper is how little has changed on the broader internet. Better tools for distributed management and policy-based routing? Accountability? Security? All things which remain challenges in the internet, and things which don't seem to have migrated to the core of the internet but remain point solutions within ASes. Perhaps this is to be expected from the internet architecture: newcomers will want to do the minimum possible to participate. It is interesting that even though the design emphasizes robustness to failure, it doesn't mention security as we currently understand it, which is tolerance to malicious hosts within the network. The paper's goals, in order of importance:
  1. Internet communication must continue despite loss of networks or gateways.
  2. The Internet must support multiple types of communications services.
  3. The Internet architecture must accommodate a variety of networks.
  4. The Internet architecture must permit distributed management of resources.
  5. The Internet architecture must be cost effective.
  6. The Internet architecture must permit host attachment with a low level of effort.
  7. The resources used in the internet architecture must be accountable.
Most of these, and in fact their ordering, still seem reasonable. I think you would have to amend 1 to read "Internet communication must continue despite loss of networks or gateways and the introduction of malicious hosts."

