Monday, September 1, 2008

Readings for Lecture Sep 2, 2008

End-to-End Arguments in System Design

The paper discusses a design principle, called the end-to-end argument, for choosing the proper boundaries between functions. The careful file transfer problem is used to articulate the argument: by appealing to application requirements, it provides a rationale for moving function upward in a layered system, closer to the application that uses that function.

The paper lays out the specific steps of the problem and then presents the possible threats that can cause a file transfer from one host to another to fail. It then considers solutions to those threats, namely an end-to-end checksum to detect failures and a retry/commit plan to recover from them. Performance aspects are used to examine the tradeoff of also placing such a function in a low-level subsystem. However, the paper concludes that the end-to-end check of the file transfer application must still be performed no matter how reliable the communication system becomes.
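As a rough illustration of that end-to-end check (my own sketch, not code from the paper; the function names and the choice of SHA-256 are assumptions), the receiver recomputes a checksum over the file it actually wrote to disk and compares it with the digest the sender computed before transmission, retrying the transfer on a mismatch:

```python
import hashlib

def file_checksum(path, chunk_size=64 * 1024):
    """Compute a SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            digest.update(chunk)
    return digest.hexdigest()

def transfer_succeeded(local_path, sender_checksum):
    """End-to-end check: compare the received file's digest with the
    digest computed by the sender; the caller retries on mismatch."""
    return file_checksum(local_path) == sender_checksum
```

The point of the sketch is that the comparison spans the whole transfer, including the disks and file systems at both ends, which is exactly what a link-level checksum cannot cover.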

By considering further examples of the end-to-end argument, such as delivery guarantees, secure transmission of data, duplicate message suppression, FIFO message delivery, and transaction management, the paper argues that the argument applies to a variety of functions beyond reliable data transmission. Application designers need to understand their applications carefully, because a lower-level subsystem that supports a distributed application may be wasting its effort providing a function that must, by its nature, be implemented at the application level anyway.
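Duplicate suppression is a handy example: only the application knows which retransmissions refer to the same request, so filtering has to use application-level identifiers. A minimal sketch (my own illustration, not from the paper) that drops messages whose ID has already been seen:

```python
class DuplicateSuppressor:
    """Drop messages whose application-level ID has already been seen."""

    def __init__(self):
        self.seen_ids = set()

    def accept(self, message_id, payload, handler):
        if message_id in self.seen_ids:
            return False              # duplicate: ignore silently
        self.seen_ids.add(message_id)
        handler(payload)              # deliver once to the application
        return True
```

In practice the set of seen IDs would be bounded (for example by a sliding window), but the placement of the check at the application level is what matters here.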

By discussing voice connections between two digital telephone instruments, the paper demonstrates that the requirements of each application play an important role in analyzing application and protocol design: what counts as the most important criterion differs from one application to another. For an interactive voice call, low delay matters more than perfectly reliable delivery, so a low-level reliability function can actually hurt.

This paper is interesting in that it presents several design perspectives for readers and designers weighing the tradeoffs of where to implement functions, for example in peer-to-peer applications. Laying out the advantages and disadvantages of each placement in different scenarios is helpful, since the possible threats are pointed out.
However, the authors do not explain how to improve performance and efficiency when implementing such applications. Future research could look at guaranteeing timing requirements for real-time distributed systems.

The Design Philosophy of the DARPA Internet Protocols

The paper aims to explain the reasoning behind the TCP/IP protocols. It begins with the primary goal: to develop an effective technique for multiplexed utilization of existing interconnected networks using packet switching. A set of second-level goals is then summarized, and these goals and the relations among them are discussed to motivate the solutions embodied in the protocols.

The paper then discusses how architecting, implementing, and verifying against this set of goals relate to performance. The advantages of the datagram, the fundamental Internet architectural feature, are presented and explained. It closes with a brief review of the history of TCP.

Packet switching is chosen as the technique for multiplexing and utilizing the existing interconnected networks, with processors called gateways implementing a store-and-forward packet forwarding algorithm. To cope with failures of networks and gateways, the state information describing an on-going conversation must be protected and stored at the endpoints of the net, at the entity utilizing the network service, rather than at the intermediate packet switching nodes.

TCP is not suited to real-time delivery of digitized speech for teleconferencing, since real-time speech prefers minimizing packet delivery delay over reliable in-order service. More than one transport service would therefore be required; this caused TCP and IP to be separated into two layers and led to the emergence of UDP on top of IP.
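A minimal Python sketch of the kind of low-overhead, best-effort sending a speech application would choose over TCP's reliable stream (my own illustration; the destination address, port, and function name are made up):

```python
import socket

# Hypothetical destination, for illustration only.
DEST = ("198.51.100.7", 5004)

# One UDP socket, reused for every frame: no connection setup at all.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_audio_frame(frame_bytes):
    """Send one audio frame as a single UDP datagram.
    There is no retransmission and no ordering guarantee: a late or
    lost frame is simply skipped, which suits real-time speech better
    than TCP's reliable, in-order byte stream."""
    sock.sendto(frame_bytes, DEST)
```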

The paper shows us the forces behind the formation of the TCP/IP protocols and briefly sketches their evolution, giving readers more insight into the design rationale. However, it does not discuss the lower-priority goals, such as resource management, in much depth.

The datagram model does not seem to be a good fit for resource management, and future research should look into this further. For better resource management, gateways could maintain information (state) about the flows passing through them. This state would be kept alive by having the endpoints periodically send messages that ensure the proper type of service stays associated with each flow.
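A minimal sketch of such a soft-state flow table (the class, the names, and the 30-second timeout are my own assumptions, not from the paper): a gateway records a flow when it sees a refresh message from an endpoint and silently forgets it if refreshes stop arriving, so a crashed endpoint leaves no stale state behind.

```python
import time

REFRESH_TIMEOUT = 30.0  # seconds; assumed value for illustration

class SoftStateFlowTable:
    """Per-flow state in a gateway, kept alive only by endpoint refreshes."""

    def __init__(self):
        self.flows = {}  # flow_id -> (service_type, last_refresh_time)

    def refresh(self, flow_id, service_type):
        """Record an endpoint's periodic message for this flow."""
        self.flows[flow_id] = (service_type, time.monotonic())

    def lookup(self, flow_id):
        """Return the flow's service type, expiring stale entries first."""
        entry = self.flows.get(flow_id)
        if entry is None:
            return None
        service_type, last_seen = entry
        if time.monotonic() - last_seen > REFRESH_TIMEOUT:
            del self.flows[flow_id]  # endpoint stopped refreshing: drop state
            return None
        return service_type
```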

