In my first post I would like to discuss a paper written by Paul Baran in 1964, "On Distributed Communications Networks" (http://www.rand.org/pubs/papers/P2626/). Many would agree that it is a must-read for building background knowledge in networking research. As in many other fields, I believe that understanding where our current technology came from gives us the perspective to better understand it and to improve it for the future.
Just to put this paper in perspective:
· the Cold War is in progress
· the fastest circuit speed is a T1 at 1.544 Mbps
· most deployed topologies are centralized or decentralized, built on reliable links
· each connection is circuit-switched, which allows only one user per circuit
In 1964, enemy attack during the Cold War was a major threat. Paul Baran was working with the military to design a highly resilient network that could survive such an attack. In his paper he starts off by comparing three network topologies: star (centralized), decentralized (hybrid), and distributed (mesh). He looks at the survivability of the network and the probability of sub-networks and nodes being separated from the remainder of the network in the event of an attack.
According to his calculations, networks averaging 3 or more links per node are best suited to the distributed topology. His paper also shows how the survivability of the network is affected by the number of links, the node destruction probability, the link destruction probability, and combinations of node and link destruction at various average link counts.
To connect network nodes with such a high level of redundancy, Baran proposed using cheaper, less reliable, slower links. The main driving force is the distributed topology and the absence of reliance on any central node. To offset the higher cost of the additional links, he proposes allowing multiple users on the same circuit by sending messages in standard message blocks (packets).
Message routing in Baran's paper is handled using hot-potato routing. Each message is sent on to other nodes until it reaches the destination, a kind of blind forwarding of the message: if I don't know where a message goes, I just pass it to one of my neighbors, that neighbor does the same, and eventually the message reaches its destination. This is not a very scalable form of routing. Messages are not stored on the routers but rather buffered. Buffering is the store-and-forward method that's used in our current routers. Store-and-forward is required because each link can be of a different bandwidth: packets arriving at 1.544 Mbps and leaving on an interface of lower bandwidth need to be buffered before being transmitted.
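A minimal sketch of this blind forwarding, in Python. The four-node topology and the hop limit are made up for illustration; nothing here comes from Baran's paper, and a real node would buffer messages per link rather than forward instantly:

```python
import random

# Hypothetical mesh: node -> list of neighbors (illustrative only).
topology = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def hot_potato(src, dst, max_hops=50):
    """Forward the message to a randomly chosen neighbor at each hop,
    with no routing knowledge, until it arrives (or we give up)."""
    path = [src]
    node = src
    for _ in range(max_hops):
        if node == dst:
            return path
        node = random.choice(topology[node])  # blind handoff to a neighbor
        path.append(node)
    return path  # hop limit hit: one reason this does not scale

random.seed(1)
path = hot_potato("A", "D")
print(path)  # some valid walk from A that almost always ends at D
```

The point of the sketch is the wandering path: the message usually arrives, but the route taken is arbitrary, which is why the handover tag discussed next is needed to bias forwarding toward good links.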
Going back to how the handover tag works with message routing: each network node keeps a table with links as columns and destinations as rows. As every message passes through, its handover tag is recorded against its source (source, link, and tag value). This is a sort of topology table, like in EIGRP. The best tag values are used for what would be the current routing table. What I found interesting is that the handover tag is used as a form of metric. This metric fills the routing table, as opposed to having a routing protocol that takes care of it. He also mentions in his paper that a cost can be associated with more preferable links via the handover tag: a node would increment the handover tag by 1 on a T1 (1.544 Mbps) line and by 8 on a 1024 Kbps line.
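The table above can be sketched as follows. This is my own minimal Python rendering of the idea, not code from the paper; the class and link names are invented, and I only model the lookup, not the per-link tag increments:

```python
class Node:
    """Handover-tag table: rows are destinations (learned from the
    sources of arriving messages), columns are this node's links."""

    def __init__(self, links):
        self.links = links    # outgoing link names, e.g. ["L1", "L2"]
        self.table = {}       # source -> {link: best (lowest) tag seen}

    def observe(self, source, link, tag):
        """Record the handover tag of a message from `source` that
        arrived on `link`, keeping only the best value per link."""
        row = self.table.setdefault(source, {l: None for l in self.links})
        if row[link] is None or tag < row[link]:
            row[link] = tag

    def best_link(self, destination):
        """The link with the lowest recorded tag -- effectively the
        routing-table entry toward `destination`."""
        row = self.table.get(destination)
        if not row:
            return None
        known = {l: t for l, t in row.items() if t is not None}
        return min(known, key=known.get) if known else None

node = Node(["L1", "L2"])
node.observe("A", "L1", 5)        # message from A arrived on L1 with tag 5
node.observe("A", "L2", 2)        # same source, lower tag via L2
print(node.best_link("A"))        # -> L2
```

Weighting links by speed would simply mean each node adds 1 to a message's tag when handing it over a T1 and 8 over a 1024 Kbps line, so slower paths accumulate larger tags and lose the `best_link` comparison.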