Friday, November 6, 2009

stty on a mac

Okay, I bought a mac. First step: listen to PrintfUART messages. Normally on Linux, you can do a
stty -F /dev/ttyUSBX 57600
tail -f /dev/ttyUSBX
Unfortunately, this doesn't work on a mac. No combination of stty options seems to do the trick to change the baud rate. Since other programs can change the rate, I assume tcsetattr actually works correctly. Fortunately, a quick python script takes care of it:
import sys, serial

# Open the serial port named on the command line at 57600 baud.
ser = serial.Serial(sys.argv[1], 57600)

done = False
while not done:
    try:
        # Echo bytes from the serial port to stdout as they arrive.
        sys.stdout.write(ser.read(1))
        sys.stdout.flush()
    except (serial.SerialException, KeyboardInterrupt):
        # Stop on Ctrl-C or if the port goes away.
        done = True
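Save it as something like readserial.py (my name, call it whatever) and pass the device as the first argument; on the mac the adapter shows up under /dev/tty.* rather than /dev/ttyUSBX, so the invocation is along the lines of
python readserial.py /dev/tty.usbserial-A1234567
where the exact device name will vary with your adapter.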

Wednesday, December 10, 2008

More Tools

Always impressed with a good tool, I've just learned about tcptrace; the homepage is here. It can take a tcpdump dump and generate basically any graph or statistic you could possibly want: throughput, data-in-flight, time-sequence-number plots, etc. It outputs graphs in xplot format. A word of warning: the debian package version of tcptrace worked fine for me, but the debian version of xplot wouldn't accept its output; maybe there's a version mismatch. I'm not actually sure the debian xplot is even the same xplot that tcptrace expects. Building xplot from source fixed the problem.
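For the record, the workflow is roughly the following (the -l and -G flags are from the tcptrace man page, and the file names are just examples; double-check against your version):
tcpdump -w trace.dump
tcptrace -l trace.dump
tcptrace -G trace.dump
xplot *.xpl
-l prints the long per-connection statistics, and -G dumps every graph tcptrace knows how to draw as .xpl files in the current directory, which you then feed to xplot.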

Also of interest is xpl2gpl, which can convert the xplot files to gnuplot files, if that's your thing. It seemed to work okay...

Thursday, November 27, 2008

pathchar

I just had one of those serendipitous google experiences where you learn something: the snippets bar above gmail had a link to a company called Packet Design, which has developed a tool to improve visibility into IP routing protocols; sounds like BGP, IS-IS, EIGRP, and OSPF mostly. Anyways, I always look at the people first, and in this case it looks like both Van Jacobson and Judy Estrin are on the board; okay, this seems serious. However, Van Jacobson's bio lists several tools he's developed, like traceroute, and one I had never heard of: pathchar. Maybe this is well known, but no one brought it up in class the other day when we were all talking about estimating hop-by-hop latency. Several [very old] binaries are available at ftp://ftp.ee.lbl.gov/pathchar/; the linux one sort of seemed to work for me. What's very interesting are the presentation slides explaining the challenges of estimating link latencies; he notes (obviously, I suppose) that queuing delay is what makes this hard, although you can filter it out on a single hop easily enough by keeping only the fastest of many probes.
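To make the queuing-delay point concrete, here is a toy sketch of the estimation idea as I understand it from the slides (my own illustration, not pathchar's code; the samples would come from real probes): probe a link with packets of several sizes, keep only the minimum RTT per size to filter out queuing delay, then fit rtt = latency + size/bandwidth.

def estimate_link(samples):
    """samples: list of (packet_size_bytes, rtt_seconds) measurements for one
    link, covering at least two distinct packet sizes.
    Returns (latency_seconds, bandwidth_bytes_per_second)."""
    # Queuing can only ever inflate an RTT, so the minimum RTT observed for
    # each packet size is the best estimate of the queue-free case.
    best = {}
    for size, rtt in samples:
        if size not in best or rtt < best[size]:
            best[size] = rtt
    # Least-squares fit of rtt = a + b*size: the intercept a is the fixed
    # (round-trip) latency of the hop, and the slope b is seconds per byte,
    # i.e. roughly 1/bandwidth.
    xs = sorted(best)
    ys = [best[x] for x in xs]
    n = float(len(xs))
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, 1.0 / b

The part I'm waving away is that pathchar has to get per-hop numbers in the first place by differencing TTL-limited probes, traceroute-style, rather than being handed clean per-link samples.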

Thursday, November 20, 2008

Scalable Application Layer Multicast

The authors in this paper present their design of a hierarchical application-layer multicast protocol they term NICE. In this context, an "application-layer" implementation of a network protocol means that all protocol functionality is present at the edges of the network, in this case only in the actual users of the data, in contrast to a "network-layer" protocol (the standard approach), where forwarding and routing are distributed across the network fabric. What this means in practice is that application-layer overlays must be constructed out of point-to-point IP links rather than any more exotic delivery model, since unicast IP is what is available.
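Just to make that concrete, here is a cartoon of what forwarding looks like when it lives entirely in the end hosts (my own minimal sketch, not NICE's protocol; the port and child addresses are made up and would really be chosen by the overlay's cluster machinery):

import socket

LISTEN_PORT = 9000                      # made-up port
CHILDREN = [("10.0.0.2", 9000),         # made-up overlay children; NICE
            ("10.0.0.3", 9000)]         # would pick these via its clusters

def relay_forever():
    # Each member receives data over ordinary unicast UDP and re-sends a
    # copy to each of its overlay children. All the "multicast" state lives
    # in end hosts; the network only ever carries point-to-point packets.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", LISTEN_PORT))
    while True:
        data, sender = sock.recvfrom(2048)
        for child in CHILDREN:
            sock.sendto(data, child)

if __name__ == "__main__":
    relay_forever()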

If one were attempting to design a "scalable application-layer multicast," hierarchy seems like the natural approach, since it works so well in making all things scalable; what is novel here is a well-stated algorithm for dealing with joins and leaves, as well as failures and partitions. In this regard, it reads similarly to the Chord paper, in that the challenge really seems to be dealing with the corner cases well.

This actually seems like a pretty reasonable scheme. I suspect a lot of applications could have, or already do have, this sort of scheme built into them, since it lets them scale nicely.

Monday, November 17, 2008

An Architecture for Internet Data Transfer

This must be a very good paper, because I was really getting ready to hate it, and by the end it had pretty much sold me that this was a good idea. I think the main lesson here is just how deeply the BSD sockets way of thinking can sink into your psyche...

Anyhow, the basic idea here is to apply the separation of control and data planes to internet transport. Essentially, a low-latency protocol can be used for signaling while a whole variety of different bulk-transfer protocols can be used to move the data. I think there are a couple of interesting pitfalls this work could have fallen into, and I think they avoided most of them. First, the paper isn't really about inventing new transport protocols, although they did, and it's nice that they work. Second, I think it would be possible to do this work without producing anything useful. What really sold it to me was the example of integrating a USB key or floppy into the network for large transfers; this is actually a useful artifact, because there are definite times when you wish you could just dump data onto a key, but it's locked in an application which can't cleanly get data out except through a socket.
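The way I picture the split (my own sketch of the idea, not the authors' API; the mount point and function names are made up): the control channel carries only a small descriptor naming the data, say by hash, and the receiver is free to fetch the actual bytes through whatever transfer plugin happens to work, including a file that walked over on a USB key.

import hashlib
import os

def control_message(data):
    # Sender side: a small, cheap-to-send descriptor naming the bulk data.
    return {"sha1": hashlib.sha1(data).hexdigest(), "length": len(data)}

def fetch_from_usb_key(descriptor, mount_point="/Volumes/USBKEY"):
    # One possible data-plane plugin: look for the named object on a key
    # that was carried over by hand (the mount point here is made up).
    path = os.path.join(mount_point, descriptor["sha1"])
    with open(path, "rb") as f:
        data = f.read()
    # The hash lets the receiver verify it got the bytes the control
    # channel actually named, no matter how they traveled.
    assert hashlib.sha1(data).hexdigest() == descriptor["sha1"]
    return data

# A receiver would just try plugins in order of preference, e.g.:
#   for fetch in (fetch_from_usb_key, fetch_over_tcp, ...):
#       try: return fetch(descriptor)
#       except (IOError, OSError): continue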

By taking a well-defined problem, the authors were also able to apply some of the good ideas from content-addressable networks to a very relatable example. This paper should definitely remain on the list.

A Delay-Tolerant Network Architecture for Challenged Internets

This paper begins with the statement that mostly-disconnected networks cannot participate directly in most internet protocols because those protocols were not designed for this regime. The paper then proposes a dramatic architectural shift which leaves no layer untouched. The architectural idea is that each network in a challenged regime has already been optimized for its particular space, with its own naming and configuration mechanisms. Thus [it is claimed] what is necessary is a new layer of glue, consisting of a generalized naming layer and a set of gateways, to bridge between these challenged networks (like military ad-hoc networks or sensor networks) and other, more reliable networks like the internet. This paper proposes a lot in eight pages.

Without going further, I think it's worth bringing up what I see as the central architectural fallacy of this paper: that a new naming layer is necessary to support these protocols. The reason this is a fallacy is that we already have such a layer which is incredibly general and imposes none of the semantics this paper seems to view as problematic. It's called IP. While I agree that below IP the link layers differ, within IP routing differs, and above IP transport cannot remain the same, the fact is that IP presents a naming mechanism which seems no worse than what is proposed here. We already have a name for "DTN gateways": routers.

The paper does have a number of bright spots. It considers SMTP valuable prior work, while noting that static mail relays will never work in a mobile environment. It also correctly dismisses application-layer gateways as blemishes on a network architecture, since they spread application code throughout the network.

If you examine the set of tasks presented in section 4, like path selection, custody transfer, and security, the issues sound no different than what would be expected when bringing the internet up on any new class of network, and so I would conclude that the paper doesn't really succeed in making the case for a wholesale architectural shift; as is noted in the conclusion, an evolutionary approach of better link technologies and more adaptive internet protocols seems likely to solve the problems this paper addresses.

Friday, November 14, 2008

DNS Performance and the Effectiveness of Caching

This is a relatively modern study of how often DNS queries can be serviced from a cache, based on snooping packets at the MIT Laboratory for Computer Science and on traces from KAIST. Their most interesting result is the claim that reducing TTLs to as little as a few hundred seconds has almost no effect on hit rates. They further claim that sharing a cache among more than a few clients is not beneficial, due to the long-tailed nature of DNS requests. These conclusions seem reasonable for the small number of users in each of their studies; they seem to claim that they will hold for larger groups as well because of the scaling properties of the hit rate in Fig 12, which is really a statement about the overlap in the set of domain names requested by a set of users. The result that NS records are well cached, so that relatively little query traffic actually reaches the root and gTLD servers, has a nice pithy quality I admire; furthermore it makes sense and bodes well for things like DynDNS.
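Their TTL claim is easy enough to convince yourself of with a toy simulation (entirely my own, with synthetic heavy-tailed traffic rather than their traces, so only the shape of the result means anything): the popular names get re-asked well within even a short TTL, while the long tail of one-off names misses no matter how long records live.

import random

def simulate(ttl_seconds, n_names=100000, n_queries=200000, rate_qps=20):
    # Queries arrive as a Poisson process; names are drawn from a Zipf-like
    # (roughly 1/x) popularity curve, which is the long tail the paper sees.
    expires = {}      # name -> absolute expiry time of its cached record
    hits = 0
    now = 0.0
    for _ in range(n_queries):
        now += random.expovariate(rate_qps)
        name = int(n_names ** random.random())   # log-uniform, Zipf-ish
        if expires.get(name, 0.0) > now:
            hits += 1
        else:
            expires[name] = now + ttl_seconds    # miss: resolve and cache
    return hits / float(n_queries)

for ttl in (15, 60, 300, 3600, 86400):
    print("ttl=%6d  hit rate=%.3f" % (ttl, simulate(ttl)))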

This paper has an interesting result, but other than the result itself, I think reading the entire paper might be overkill for the class. I would rather we spend some time on security and discuss DNSSEC, since our reading list has almost no papers considering security, and I think that paper would spur an interesting discussion.