Tuesday, September 30, 2008

Readings for Lecture Sep 30, 2008

MACAW: A Media Access Protocol for Wireless LAN’s

The emergence of a wide variety of mobile computing devices such as palmtops, personal digital assistants, and portable computers, together with the demand for network connectivity, has motivated a new generation of wireless network technology, and WLAN is a crucial enabling technology. The MAC layer of a WLAN can be very different from that of a traditional wired LAN because the wireless medium is a shared, and scarce, resource.

The WLAN infrastructure consists of “base stations” and pads, which are custom-built portable computing devices. All wireless communication is between a pad and a base station. The transmission range is 3 to 4 meters, and the near-field signal strength decays very rapidly. Each base station therefore defines a very small cell with sharply defined boundaries, called a “nanocell”.

A collision occurs when a receiver is in the reception range of two transmitting stations and is unable to cleanly receive the signal from either station. Capture occurs when a receiver is in the reception range of two transmitting stations but is able to cleanly receive the signal from the closer station. Interference occurs when a receiver is in range of one transmitting station and slightly out of range of another, but is unable to cleanly receive the signal of the closer station due to the interfering presence of the other signal.

The common wireless multiple-access algorithm is CSMA, in which every station senses the carrier before transmitting. This algorithm has a problem at the receiver: the senders cannot sense the carrier at the receiver, so senders can be hidden from each other. The MACA algorithm introduces an RTS-CTS exchange to initiate a transmission, and uses binary exponential backoff (BEB) to determine the retransmission time.

An ACK is implemented at the link layer to enable fast error recovery there, since packets are frequently lost to collisions and corrupted by noise. To further improve the RTS-CTS model, a Data-Sending (DS) packet is sent by the sender to let other stations know that the RTS-CTS exchange has succeeded. The RRTS packet is added to let a receiver contend on behalf of a sender it is ready to receive from. To implement the backoff algorithm efficiently, each station should maintain a separate backoff counter for each stream, as in the sketch below.
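As a rough illustration (not MACAW's exact rules, which also include backoff copying between stations), here is a minimal Python sketch of per-stream backoff counters with a BEB-style update; the Station class and the BO_MIN/BO_MAX bounds are invented for the example:

    import random

    BO_MIN, BO_MAX = 2, 64    # assumed bounds on the backoff counter

    class Station:
        def __init__(self):
            self.backoff = {}              # one counter per (src, dst) stream

        def counter(self, stream):
            return self.backoff.setdefault(stream, BO_MIN)

        def on_collision(self, stream):
            # Exponential increase on a failed RTS, for this stream only;
            # congestion on one stream does not penalize the others.
            self.backoff[stream] = min(self.counter(stream) * 2, BO_MAX)

        def on_success(self, stream):
            self.backoff[stream] = BO_MIN  # reset after a completed exchange

        def retransmission_delay(self, stream):
            # Wait a uniformly chosen number of slots before retrying the RTS.
            return random.randint(1, self.counter(stream))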

A Comparison of Mechanisms for Improving TCP Performance over Wireless Links
TCP responds to all losses by invoking congestion control and avoidance algorithms, resulting in degraded end-to-end performance in wireless and lossy systems. The paper classifies the schemes into three broad categories: end-to-end protocols, where loss recovery is performed by the sender; link-layer protocols, which provide local reliability; and split-connection protocols, which break the end-to-end connection into two parts at the base station.

The end-to-end throughput and goodput are used as performance metrics to answer specific questions: What combination of mechanisms results in the best performance for each of the protocol classes? How important is it for link-layer schemes to be aware of TCP algorithms to achieve high end-to-end throughput? How useful are selective acknowledgements in dealing with lossy links, especially in the presence of burst losses? Is it important for the end-to-end connection to be split in order to effectively shield the sender from wireless losses and obtain the best performance? The goodput for any path (or link) is defined as the ratio of the actual transfer size to the total number of bytes transmitted over that path.

The link-layer protocols use two main classes of techniques: error correction, using techniques such as forward error correction (FEC), and retransmission of lost packets in response to automatic repeat request (ARQ) messages.

The split-connection protocols split each TCP connection between a sender and receiver into two separate connections at the base station. Over the wireless hop, a specialized protocol tuned to the wireless environment may be used. One proposal includes two protocols: one in which the wireless hop uses TCP, and another in which the wireless hop uses a selective repeat protocol (SRP) on top of UDP.

The snoop protocol introduces a module, called the snoop agent, at the base station. The agent monitors every packet that passes through the TCP connection in both directions and maintains a cache of TCP segments sent across the link that have not yet been acknowledged by the receiver. A packet loss is detected by duplicate acknowledgements or by a local timeout. The snoop agent retransmits the lost packet if it has it cached and suppresses the duplicate acknowledgements, as sketched below.
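A minimal sketch of that cache-and-suppress logic, assuming a single connection with integer sequence numbers; the class and method names are invented, and the real agent also manages local timers and data arriving from the mobile host:

    class SnoopAgent:
        def __init__(self, wireless_send):
            self.cache = {}            # seq -> packet, unacked segments
            self.last_ack = -1
            self.dup_acks = 0
            self.wireless_send = wireless_send

        def on_data_from_sender(self, seq, packet):
            self.cache[seq] = packet   # cache a copy before forwarding
            self.wireless_send(packet)

        def on_ack_from_receiver(self, ack):
            if ack > self.last_ack:    # new ack: clean the cache, forward it
                for seq in [s for s in self.cache if s <= ack]:
                    del self.cache[seq]
                self.last_ack, self.dup_acks = ack, 0
                return ack             # propagate to the fixed sender
            self.dup_acks += 1         # duplicate ack: likely a wireless loss
            lost = ack + 1
            if lost in self.cache:
                self.wireless_send(self.cache[lost])  # local retransmission
                return None            # suppress the dup ack from the sender
            return ack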

Selective acknowledgements contain information about up to three non-contiguous blocks of data that have been received successfully by the receiver. An alternate proposal, SMART, uses acknowledgements that contain the cumulative acknowledgement and the sequence number of the packet that caused the receiver to generate the acknowledgement.
The link-layer protocols that use knowledge of TCP improve throughput by 10-30%. The SMART-based selective acknowledgement scheme also yields good throughput.

Tuesday, September 23, 2008

Readings for Lecture Sep 23, 2008

Scaling Internet Routers Using Optics

The Internet is widely reported to have a glut of capacity, with average link utilization below 10% and a large fraction of installed but unused link capacity. Even so, the capacity of routers must grow: the demand for network capacity continues to double every year, while doubling the number of routers each year is impractical.

Current multi-rack systems suffer from unpredictable performance and poor scalability. This is a problem for operators: because they do not know what utilization their routers can sustain, they cannot fully exploit their long-haul links. The paper explains how optics, which consume almost zero power for switching, can be used to design architectures with predictable throughput and scalable capacity.

The load-balanced switch consists of a single buffer stage sandwiched between two identical switching stages. The buffer at each intermediate input is partitioned into N separate FIFO queues, one per output, called Virtual Output Queues (VOQs); there are N x N such queues in total. Both switching stages walk through a fixed sequence of configurations.

The sequence of switch configurations is predetermined, regardless of the traffic or the state of the queues, yet the load-balanced switch achieves 100% throughput even for non-uniform traffic. The two switches running at rate R/N can then be replaced by a single switch running twice as fast. The Full Ordered Frames First (FOFF) scheme can be used to bound the number of mis-sequenced packets in the switch to at most N^2 + 2; a sketch of the fixed configuration sequence follows.
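A small sketch of this traffic-independent operation for an N-port switch, assuming the standard cyclic configuration in which port i connects to port (i + t) mod N at slot t (names and the N x N VOQ layout follow the description above; details are illustrative):

    N = 4

    def config(t):
        # Fixed cyclic schedule: every input visits every output once per N slots.
        return {i: (i + t) % N for i in range(N)}

    # voq[m][o]: packets buffered at middle port m, destined to output o.
    voq = [[[] for _ in range(N)] for _ in range(N)]

    def stage1_arrival(t, in_port, packet, out_port):
        middle = config(t)[in_port]           # stage 1 spreads load blindly
        voq[middle][out_port].append(packet)  # queue per final output

    def stage2_departure(t):
        sent = []
        for middle, out in config(t).items(): # stage 2's fixed matching
            if voq[middle][out]:
                sent.append(voq[middle][out].pop(0))
        return sent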

This router architecture, which can guarantee throughput, can be useful to design routers for real-time Internet protocols.

A Fast Switched Backplane for a Gigabit Switched Router

Router functions can be separated into two types: datapath functions, often implemented in hardware, and control functions, often implemented in software. There are three trends in designing high-performance routers: 1) the datapath is implemented more in hardware; 2) parallelism is employed more; 3) most importantly, designs are moving away from shared buses, since a bus shared among multiple functions becomes congested.

Early routers were built around a conventional computer architecture: a shared central bus, a central CPU, memory, and line cards. Their main limitation is that the central CPU has to process every packet, limiting the throughput of the system. The improved architecture, with multiple CPUs making local forwarding decisions, is still limited, since forwarding decisions made in software can be outperformed by carefully designed ASICs for making forwarding decisions, managing queues, and arbitrating access to the bus. The shared bus in this architecture is a second limiting factor, since only one packet can traverse the bus at a time.

Replacing the local CPUs with hardware forwarding engines on each line card and replacing the bus with a crossbar switch solves these problems. Performance is further boosted by using multiple crossbar switches, or slices, in parallel to form a switch core. The packet length is chosen to be fixed for ease and efficiency of design. The head-of-line blocking problem is fixed by using virtual output queueing (VOQ).

The scheduling algorithm has to satisfy the following properties: high throughput, freedom from starvation, speed, and simplicity of implementation. The iSLIP algorithm is chosen; a simplified sketch follows. A priority packet-scheduling scheme can make this architecture suitable for the RSVP protocol.
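A simplified single-iteration sketch of iSLIP's request-grant-accept round (real iSLIP runs several iterations per cell time; the variable names here are invented):

    N = 4
    grant_ptr = [0] * N     # one round-robin pointer per output
    accept_ptr = [0] * N    # one round-robin pointer per input

    def islip_round(occupancy):
        # occupancy[i][j] > 0 means input i has cells queued (VOQ) for output j.
        # Request: every input requests all outputs it has cells for.
        requests = {j: [i for i in range(N) if occupancy[i][j] > 0]
                    for j in range(N)}
        # Grant: each output grants the requester closest to its pointer.
        grants = {}
        for j, reqs in requests.items():
            if reqs:
                grants[j] = min(reqs, key=lambda i: (i - grant_ptr[j]) % N)
        # Accept: each input accepts the granting output closest to its pointer.
        offers = {}
        for j, i in grants.items():
            offers.setdefault(i, []).append(j)
        match = []
        for i, outs in offers.items():
            j = min(outs, key=lambda o: (o - accept_ptr[i]) % N)
            match.append((i, j))
            # Pointers advance only past accepted grants; this property is
            # what makes iSLIP starvation-free.
            grant_ptr[j] = (i + 1) % N
            accept_ptr[i] = (j + 1) % N
        return match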

Thursday, September 18, 2008

Readings for Lecture Sep 18, 2008

Fundamental Design Issues for the Future Internet

The paper discusses the fundamental architectural design issues of the “future” Internet. The Internet at the paper’s time offers only a single class of “best-effort” service and is neutral: there is no admission control, and the network makes no guarantees about when, or whether, packets will be delivered.

The emerging real-time applications, such as video and voice, have characteristics and requirements different from those of data applications and could significantly alter the Internet architecture: real-time applications do not perform adequately over the current Internet due to variations in transmission delay. In addition, real-time applications cannot back off during congestion; they continue to contend for bandwidth with data applications, starving the data applications of bandwidth.

The above issue can be partly resolved by using fair queueing and by modifying applications to adapt to delay variations. However, without admission control, the network can still reach a state where each flow's fair share of bandwidth is too small. Thus, it is necessary to extend the current Internet service model from a single best-effort class to a variety of service classes.

The goal of the network design is then to maximize the performance of the resident applications, called the efficacy: V ≡ Σ_i U_i(s_i), where U_i is the utility application i derives from the service s_i it receives. V can be improved, without supplying more bandwidth, by providing a wider variety of services rather than a single class of best-effort service, as the toy calculation below illustrates.
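As a toy illustration of this point (the utility functions below are made up, not from the paper), consider a delay-sensitive flow with a sharp utility curve and an elastic flow sharing a fixed link:

    import math

    def u_realtime(bw):   # sharp utility: needs ~6 units to be useful at all
        return 1.0 if bw >= 6 else 0.0

    def u_elastic(bw):    # diminishing returns, e.g. a file transfer
        return math.log1p(bw)

    C = 10                # fixed link capacity

    equal = u_realtime(C / 2) + u_elastic(C / 2)   # single best-effort class
    tiered = u_realtime(6) + u_elastic(C - 6)      # two service classes
    print(equal, tiered)  # ~1.79 vs ~2.61: V rises with no extra bandwidth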

There are two possible ways to determine the service for a flow. First, the network can supply it implicitly, meaning the application does not tell the network about its service requirements; however, this requires embedding application information into the network, which violates an Internet design tenet. Second, applications can explicitly request their desired service class. This also has disadvantages, such as the need for incentives to keep users' requests honest and a loss of flexibility in the service model.
The paper does not address the problem of delay bounds, and it assumes that bandwidth will be extremely inexpensive.

Supporting Real-time Applications in an Integrated Services Packet Network: Architecture and Mechanism

The paper proposes an Integrated Services Packet Network (ISPN) architecture that supports two distinct kinds of real-time service: guaranteed service, which involves pre-computed worst-case delay bounds, and predicted service, which uses the measured performance of the network in computing delay bounds.

The characteristics of a class of real-time applications dubbed playback applications are discussed to identify requirements for the design proposal. The playback class has four important properties: 1) it prefers low delay; 2) the applications need to know an absolute or statistical bound on the delay; 3) data just needs to arrive before its playback point; 4) playback applications can tolerate the loss of a certain fraction of packets.

The guaranteed service commitment is: if the network hardware is functioning and the client conforms to its traffic characterization, then the service commitment will be met (it does not require other network clients to conform). The predicted service commitment is: if the past is a guide to the future, then the network will meet its service characterization.

The Weighted Fair Queueing (WFQ) algorithm is suitable for guaranteed service; this scheduling does not favor bursty sources. The FIFO algorithm is better for packet scheduling in predicted service: it provides better utilization of the network but does not provide any isolation. A unified algorithm handles both types of service, using the WFQ scheme as a framework into which other scheduling algorithms can be fitted: all predicted-service traffic is assigned to a pseudo WFQ flow with a constant data rate.

The paper assumes that most future real-time applications will fit the playback model. Other types of real-time applications, such as real-time control systems that cannot tolerate data loss, are not discussed.

Tuesday, September 16, 2008

Readings for Lecture Sep 16, 2008

Random Early Detection (RED) Gateways for Congestion Avoidance

Unlike the DECbit congestion avoidance scheme, RED gateways detect incipient congestion by computing the average queue size. The gateway can then implicitly notify the sources of congested connections, either by dropping packets or by setting a bit in packet headers; when the destination returns an ACK to the source, it echoes this bit.

A further element of RED is the mechanism for dropping a packet. When the average queue length exceeds a minimum threshold, the gateway drops or marks each arriving packet with a certain probability; when the average queue length exceeds a maximum threshold, every arriving packet is marked.

For each queue, the average queue length avg is estimated by avg = (1 - weight) * avg + weight * currentQueueLen. If min_th <= avg < max_th, the marking probability is p_b = max_p * (avg - min_th)/(max_th - min_th), and the probability actually used is p_a = p_b/(1 - count * p_b), where count is the number of packets since the last mark. If avg >= max_th, all arriving packets are marked. The sketch below transcribes these rules.
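A direct transcription of these rules into Python (a sketch; real RED also adjusts avg across idle periods, and the parameter values here are only illustrative):

    import random

    weight, min_th, max_th, max_p = 0.002, 5.0, 15.0, 0.02
    avg, count = 0.0, -1    # count = packets since the last mark

    def on_packet_arrival(queue_len):
        global avg, count
        avg = (1 - weight) * avg + weight * queue_len
        if avg >= max_th:
            count = 0
            return True                      # mark/drop every packet
        if avg >= min_th:
            count += 1
            pb = max_p * (avg - min_th) / (max_th - min_th)
            # dividing by (1 - count*pb) spreads marks evenly over arrivals;
            # clamp to keep the sketch safe near the boundary
            pa = min(1.0, pb / max(1e-6, 1 - count * pb))
            if random.random() < pa:
                count = 0
                return True
        else:
            count = -1
        return False                         # let the packet through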

The value min_th reflects the desired utilization of the link. If the typical traffic is rather bursty, then min_th has to be set correspondingly large to allow the link utilization to stay acceptably high. max_th depends in part on the maximum average delay that the gateway can allow. The RED gateway functions most effectively when max_th - min_th is larger than the typical increase in the calculated average queue size in one round-trip time.

The mechanism presented in this paper is particularly useful in networks where it is undesirable to drop packets, since the RED gateway predicts and controls the average queue size before congestion happens. However, the paper does not specify whether the queue size should be measured in bytes or in packets.

Congestion Control for High Bandwidth-Delay Product Networks
The Explicit Control Protocol (XCP) generalizes explicit congestion notification. In XCP, efficiency control and fairness control are separated. XCP senders maintain their congestion window cwnd and round-trip time rtt and communicate these to the routers via a congestion header in every packet. Based on the difference between the link bandwidth and its input traffic rate, the router tells the flows sharing that link to increase or decrease their congestion windows.

Each XCP packet carries a congestion header, which is used to communicate a flow’s state to routers and to carry feedback from the routers back to the sources. H_cwnd is the sender’s current congestion window, and H_rtt is the sender’s current RTT estimate. H_feedback is initialized by the sender and modified by the routers along the path to directly control the congestion windows of the sources.

Whenever a new acknowledgment arrives, positive feedback increases the sender’s cwnd and negative feedback reduces it: cwnd = max(cwnd + H_feedback, s), where s is the packet size. Whenever the receiver acknowledges a packet, it copies the congestion header from the data packet into its acknowledgement.

The efficiency controller’s (EC) purpose is to maximize link utilization while minimizing the drop rate and persistent queues. Since the EC is separated from fairness control, an aggregate feedback φ is computed for each control interval: φ = α · avgRTT · SpareBandwidth - β · PersistentQueueSize. The fairness controller then divides φ among the flows, following the Additive Increase, Multiplicative Decrease principle for converging to fairness: positive feedback is allocated equally per flow, while negative feedback is allocated in proportion to each flow’s throughput. A sketch of the efficiency computation follows.
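A sketch of the efficiency controller's per-interval computation; α = 0.4 and β = 0.226 are the stable gain values given in the paper, while the function and variable names are invented:

    alpha, beta = 0.4, 0.226

    def aggregate_feedback(capacity_bps, input_rate_bps, avg_rtt_s, queue_bytes):
        # Spare bandwidth in bytes/sec; negative when the link is overloaded.
        spare = (capacity_bps - input_rate_bps) / 8.0
        # Aggregate feedback for this control interval, in bytes: drive the
        # input rate toward capacity and drain any persistent queue.
        phi = alpha * avg_rtt_s * spare - beta * queue_bytes
        return phi

    # Example: a 100 Mb/s link, 90 Mb/s of input, 100 ms average RTT,
    # and a 5 KB standing queue yields a positive phi (room to grow).
    print(aggregate_feedback(100e6, 90e6, 0.1, 5000))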

The delay issue is not really evaluated in this paper. An open question is whether the protocol can be extended to guarantee that the delivery delay of a packet is bounded.

Thursday, September 11, 2008

Readings for Lecture Sep 11, 2008

Analysis and Simulation of a Fair Queueing Algorithm

Queueing algorithms allocate three nearly independent quantities: bandwidth, promptness, and buffer space. The first-come-first-served (FCFS) queueing algorithm is not adequate to prevent an ill-behaved source from sending data to the gateway at a rate high enough to capture all, or a high fraction of, the bandwidth. The fair queueing (FQ) algorithm proposed by Nagle handles this issue by maintaining a separate queue for packets from each individual source, so that one source cannot interfere with the others.

However, Nagle’s proposal does not fully achieve fairness in bandwidth, since a source using long packets gets more bandwidth than a source using short packets. Additionally, static round-robin ordering violates the continuity requirement.

Max-min fairness states that an allocation is fair if: no user gets more than it requests; users whose demand is below the fair share get exactly their demand and are removed from the list; and the remaining users, whose demand is higher, get equal shares of the remaining resource.

A straightforward bit-by-bit round-robin algorithm, in which each time slot is given to one bit from one source in turn, is not realistic. The alternative is to emulate it at packet granularity: for each packet, compute F, the number of bits already transmitted for that source plus the number of bits in the packet, and always transmit the packet with the smallest F. This algorithm thus emulates bit-by-bit round robin. In the preemptive version, a newly arrived packet with a smaller F preempts the packet currently being transmitted.

A non-negative queueing delay parameter δ is added to give inactive conversations faster service. The sending order is then determined by the bid B_i = P_i + max(F_{i-1}, R(t_i) - δ) rather than by F_i alone, where P_i is the packet length, F_{i-1} is the conversation’s previous finish number, and R(t) is the round number. If a conversation is active, then R(t_i) <= F_{i-1}, so δ has no effect on it. For inactive conversations, R(t_i) > F_{i-1}, so the previous history of the source is taken into account, as in the sketch below.
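A minimal sketch of this bookkeeping, assuming the round number R(t) is supplied by the caller (computing it exactly requires tracking the set of active conversations); the names and the δ value are illustrative:

    finish = {}    # last finish number F per conversation
    DELTA = 100.0  # promptness bonus for inactive conversations (illustrative)

    def bid_for_packet(conv, pkt_bits, round_now):
        prev = finish.get(conv, 0.0)
        # B_i = P_i + max(F_{i-1}, R(t_i) - delta): an inactive conversation
        # (prev < round_now - DELTA) gets bid slightly ahead of its fair slot.
        bid = pkt_bits + max(prev, round_now - DELTA)
        # The finish number itself is updated without the delta term.
        finish[conv] = max(prev, round_now) + pkt_bits
        return bid    # transmit the queued packet with the smallest bid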

This algorithm is complex in the sense that it has to keep state for each flow through a gateway, which is hard when the number of flows is large. Another issue is how a gateway knows when a flow terminates so that it can clear the state for that flow.

Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks

To reduce the complexity of the fair queueing (FQ) algorithm, an architecture called core-stateless fair queueing (CSFQ) can be used. In this architecture, only edge routers maintain per-flow state, estimating the incoming rate of each flow and inserting a label into each packet header based on this estimate. Core routers maintain no per-flow state; instead they use FIFO scheduling augmented by a probabilistic dropping algorithm that uses the packet labels and an estimate of the aggregate traffic at the router.

In an idealized bufferless fluid model, the arrival rate r_i(t) of each flow is assumed to be known exactly, and flows are modeled as continuous streams of bits. The max-min bandwidth for flow i is min(r_i(t), α(t)), where α(t) is the fair share rate of the server. If the total arrival rate A(t) = Σ r_i(t) is less than the output link speed C, then no bit is dropped. Otherwise, bits from flows with r_i(t) > α(t) are dropped with probability max(0, 1 - α(t)/r_i(t)).

The packet algorithm extends the fluid algorithm to packetized transmission and buffering. The flow arrival rate is not given; instead, the rate of each flow is estimated by exponential averaging with weight e^(-T/K), where T is the packet interarrival time and K is a constant, and the aggregate arrival rate A is estimated in the same form. CSFQ can be further extended to support flows with different weights. The exponential form of the estimator helps the estimated rate converge to the real rate and allows the excess service to be bounded; see the sketch below. The size of each island also needs more detailed consideration, since it can affect the latency of the system.
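A sketch of the two halves under these definitions: the edge's exponentially averaged rate estimate and the core's probabilistic drop. K and the fair rate α are treated as given here; the real algorithm also estimates α iteratively at each core router:

    import math, random

    K = 0.1    # averaging constant, in seconds (illustrative value)

    def edge_update_rate(old_rate, pkt_bytes, interarrival_s):
        # Exponential averaging with weight e^(-T/K); assumes interarrival > 0.
        w = math.exp(-interarrival_s / K)
        new_rate = (1 - w) * (pkt_bytes / interarrival_s) + w * old_rate
        return new_rate    # written into the packet header as its label

    def core_forward(label_rate, fair_rate):
        # Drop with probability max(0, 1 - alpha/r); otherwise plain FIFO,
        # with no per-flow state kept at the core.
        p_drop = max(0.0, 1.0 - fair_rate / label_rate)
        return random.random() >= p_drop    # True = enqueue, False = drop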

The paper does not discuss the latency of the approach, which can be a major issue for real-time distributed systems. For future research, this architecture could be extended to handle real-time communication: edge routers would not only estimate flow rates but also classify packets according to patterns, while still preserving packet order.

Tuesday, September 9, 2008

Readings for Lecture Sep 9, 2008

Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks

The co-existence of new high-bandwidth technologies, like Local Area Networks (LANs) and fiber-optic links, with old low-bandwidth technologies, like twisted pair, results in a discrepancy between arrival and service rates at intermediate nodes in the network, causing growing queues and congestion.

To allow networks to operate in the optimal region of low delay and high throughput, congestion avoidance is needed to keep networks from becoming congested. As the load increases, the throughput increases; after the load reaches the network capacity, the throughput stops increasing. If the load increases further, queues build up and packets may be dropped. One congestion avoidance scheme is the “binary feedback scheme”, in which increase/decrease algorithms are used to address the problem.

The criteria for selecting a control are efficiency, fairness, distributedness, and convergence. Efficiency is the closeness of the total load on the resource, X(t) = Σ x_i(t), to the knee of the throughput curve. Users in the same equivalence class should be given the same amount of resource; fairness can be measured with a fairness index such as F(x) = (Σ x_i)^2 / (n · Σ x_i^2). The control scheme should be distributed rather than centralized. The system should also be relatively stable: the control should converge, not necessarily to a steady state, but to an “equilibrium” around the goal.

For linear controls: x_i(t+1) = a_I + b_I · x_i(t) if y(t) = 0 (increase), and x_i(t+1) = a_D + b_D · x_i(t) if y(t) = 1 (decrease). Different assignments to a_I, b_I, a_D, b_D give different control functions, and the selection criteria impose constraints on them that establish the feasible control region. There is a trade-off between the time to converge and the size of the oscillations for the efficiency criterion. For optimal convergence, the policy should be additive increase and multiplicative decrease, as the simulation sketch below illustrates.
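A small simulation sketch (with illustrative parameter values) showing why additive increase / multiplicative decrease converges to fairness for two users sharing one bottleneck:

    aI, bD, capacity = 1.0, 0.5, 100.0

    x = [10.0, 70.0]                       # unequal starting rates
    for step in range(40):
        overloaded = sum(x) > capacity     # the binary feedback y(t)
        if overloaded:
            x = [bD * xi for xi in x]      # multiplicative decrease
        else:
            x = [xi + aI for xi in x]      # additive increase
    print(x)
    # The decrease shrinks the gap between the users multiplicatively,
    # while the increase never widens it, so the rates converge toward
    # an equal share while oscillating around the capacity.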

The problem in the paper is very abstract. To apply the results in practice, many other issues need to be taken into consideration; for example, resource allocations must be integral, and simple rounding to the nearest integer may violate the convergence criteria. Additionally, the scales and parameters in the paper are not easily gathered automatically and often require human configuration, leaving the algorithm more vulnerable to human error.
The paper does not discuss the effect of delayed feedback on the control; it seemingly assumes that the communication time is zero. A further extension from “binary” to multi-valued feedback would help describe the exact state of the receivers. The paper also assumes that the number of users stays the same over time and that all users start from the same initial state and follow the same phases of computation and feedback.

Congestion Avoidance and Control

Starting from a sudden factor-of-thousand drop in bandwidth, the authors investigate why things had gotten so bad. The principle of “conservation of packets” is that, when a connection is in equilibrium, a new packet is not put into the network until an old packet leaves. The physics of flow predicts that systems with this property should be robust in the face of congestion.

However, observation of the Internet showed it was not particularly robust. The reasons for this failure are: the connection does not reach equilibrium, or a sender injects a new packet before an old one has exited, or equilibrium cannot be reached because of resource limits along the path.

To help a connection reach equilibrium, a slow-start scheme is used. Each connection has a congestion window, cwnd. When starting, or restarting after a loss, set cwnd to one packet. On each acknowledgment for new data, increase cwnd by one packet. When sending, send the minimum of the receiver’s advertised window and cwnd.

The second problem originates from neglecting the variation of the round-trip time R, since R increases quickly with load. Thus the retransmit timeout interval, RTO, has to be estimated accurately from updated measurements of R, tracking both its mean and its variation, as in the sketch below.
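A sketch of the mean-plus-deviation retransmit timer along the lines of the paper's appendix; the gains of 1/8 and 1/4 and the deviation multiplier of 4 follow the widely deployed variant (the paper chooses small powers of two so the filter runs with shifts and adds), and the class name is invented:

    class RttEstimator:
        def __init__(self, first_sample):
            self.a = first_sample   # smoothed average of R
            self.v = 0.0            # smoothed mean deviation of R

        def update(self, measured_rtt):
            err = measured_rtt - self.a
            self.a += 0.125 * err                    # gain 1/8 on the mean
            self.v += 0.25 * (abs(err) - self.v)     # gain 1/4 on the deviation
            return self.rto()

        def rto(self):
            # Mean plus a few deviations tracks load-induced RTT variation
            # far better than the old "2 x smoothed RTT" rule.
            return self.a + 4.0 * self.v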

A congestion avoidance scheme is used to adapt the connection to the path. The strategy employs an additive-increase/multiplicative-decrease policy: on any timeout, set cwnd to half the current window size; on each acknowledgment, increase cwnd by 1/cwnd; when sending, send the minimum of the receiver’s advertised window and cwnd. This strategy is easily confused with slow start; in fact the two cooperate: congestion avoidance sets the threshold at which slow start stops, since otherwise cwnd would grow without bound. A combined sketch follows.
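A combined sketch of the two schemes as described above (packet-counting units and an invented class name; real TCP counts bytes and adds fast retransmit):

    class CongestionWindow:
        def __init__(self):
            self.cwnd = 1.0
            self.ssthresh = 64.0

        def on_ack(self):
            if self.cwnd < self.ssthresh:
                self.cwnd += 1.0              # slow start: doubles per RTT
            else:
                self.cwnd += 1.0 / self.cwnd  # avoidance: +1 packet per RTT

        def on_timeout(self):
            self.ssthresh = self.cwnd / 2.0   # remember half the window
            self.cwnd = 1.0                   # restart with slow start

        def send_window(self, receiver_window):
            return min(self.cwnd, receiver_window)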

The paper deals with algorithms for the endpoints, which become less effective when the endpoints are far apart. The algorithms should be extended with congestion avoidance at the gateways; this could be difficult since each gateway often carries many different flows.

Thursday, September 4, 2008

Readings for Lecture Sep 4, 2008

Interdomain Internet Routing

The wide-area routing architecture is divided into autonomous systems (ASes) that exchange reachability information. Each AS is owned and administered by a single commercial entity and implements a set of policies in deciding how to route its packets to the rest of the Internet and how to export its routes to other ASes. Each AS is identified by a unique 16-bit number.

Within each AS, routing protocols called Interior Gateway Protocols (IGPs) operate. Between ASes, interdomain routing protocols, called Exterior Gateway Protocols (EGPs), are used; the Border Gateway Protocol (BGP) is the current standard. BGP was designed around three important needs: scalability, policy, and cooperation under competitive circumstances.

There are two prevalent forms of AS-AS interconnection: provider-customer transit, and peering, in which two ASes (typically ISPs) provide mutual access to a subset of each other’s routing tables. Each ISP exports only part of its routing table to other ISPs, filtering according to its policy so that it carries only traffic from which it benefits.

Routes are then imported by ASes in the following priority order: customer > peer > provider. This rule can be implemented in BGP using a special attribute that is locally maintained by routers within an AS, the LOCAL_PREF attribute, as the sketch below illustrates. There are two types of BGP sessions: eBGP sessions run between BGP-speaking routers in different ASes, while iBGP sessions run between BGP-speaking routers within the same AS. After learning routes to the external world over eBGP, routers distribute them to the other routers inside the AS via iBGP sessions. iBGP is different from IGPs like OSPF and RIP.
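A sketch of that import preference; the numeric LOCAL_PREF values are conventional examples rather than a standard, and the route representation is invented:

    LOCAL_PREF = {"customer": 200, "peer": 100, "provider": 50}

    def best_route(routes):
        # routes: list of (relationship_of_neighbor, as_path) tuples.
        # Higher LOCAL_PREF wins; a shorter AS path breaks ties (BGP's
        # full decision process has several more tie-breakers).
        return max(routes,
                   key=lambda r: (LOCAL_PREF[r[0]], -len(r[1])))

    r = best_route([("provider", ["A", "B"]),       # short but costs money
                    ("customer", ["C", "D", "E"])])
    print(r)  # the customer route wins despite its longer AS path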

BGP itself is a rather simple protocol, but its configuration flexibility makes it complex. Another issue is that BGP was not designed for rapid fault detection and recovery; furthermore, upon detecting a fault, a router sends a withdrawal message to its neighbors, and to avoid massive route oscillations the further propagation of such route announcements is damped. This damping can delay convergence.

One interesting feature of BGP is its policy support, such as hot-potato routing; the paper does not discuss in detail how to allow cold-potato routing. Future research directions relate to policy, failover, scalability, configuration, and correctness.

On Inferring Autonomous System Relationships in the Internet
A pair of ASes is interconnected via dedicated links and/or public network access points, and routing between ASes is determined by the interdomain routing protocol, namely the Border Gateway Protocol (BGP). BGP is closely tied to the policies of the ASes (ISPs).

Thus, the relationships between ASes are an important aspect of the Internet architecture. Each AS sets up its own BGP export policies according to its relationships with other ASes. The paper proposes a heuristic algorithm to classify AS relationships based on BGP routing tables. This work helps in understanding structural properties of the Internet.

The AS relationships are represented by an annotated AS graph: a partially directed graph whose nodes represent ASes and whose edges are classified into provider-to-customer, customer-to-provider, peer-to-peer, and sibling-to-sibling edges.

An AS selectively provides transit services for its neighboring ASes and sets up its BGP export policies according to the selective export rule, which ensures that BGP routing table entries contain only valley-free AS paths. The top provider of the uphill (or downhill) portion of an AS path is defined as the AS with the highest degree among all ASes in the path's maximal uphill (or downhill) segment.

The algorithm goes through the AS path in each routing table entry, finds the highest-degree AS, and takes that AS as the top provider of the path. From the top provider, the algorithm works outward to infer the relationship between each pair of consecutive ASes in the path, as in the sketch below.
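A sketch of this basic inference phase, with node degree computed from the paths themselves; the full algorithm also infers sibling edges and handles misconfigured entries:

    from collections import defaultdict

    def infer(paths):
        # Degree of each AS, counted over the observed adjacencies.
        neighbors = defaultdict(set)
        for p in paths:
            for a, b in zip(p, p[1:]):
                neighbors[a].add(b); neighbors[b].add(a)
        rel = {}
        for p in paths:
            # The highest-degree AS on the path is taken as the top provider.
            top = max(range(len(p)), key=lambda i: len(neighbors[p[i]]))
            for i, (a, b) in enumerate(zip(p, p[1:])):
                # Edges before the top provider climb uphill, edges after
                # it descend downhill (the valley-free property).
                rel[(a, b)] = ('customer-to-provider' if i < top
                               else 'provider-to-customer')
        return rel

    print(infer([["A", "B", "C"], ["D", "B", "E"]]))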
However, the above algorithm does not identify peering relationships. An AS pair has a peering relationship if and only if the two ASes do not transit traffic for each other; this is the original idea behind the heuristic algorithm.

This paper is interesting in the sense that ISPs and companies can use AS relationship information to plan future contractual agreements.

Monday, September 1, 2008

Readings for Lecture Sep 2, 2008

End-to-end Arguments in System Design

The paper discusses a design principle, called the end-to-end argument, for choosing the proper boundaries between functions. The careful file transfer problem is used to articulate the argument: appealing to application requirements provides a rationale for moving a function upward in a layered system, closer to the application that uses it.

The paper lays out the specific steps of the problem and then presents the possible threats that can cause a file transfer from one host to another to fail. It then considers solutions to these threats, namely checksums to detect failures and a retry/commit plan. Performance aspects are used to justify placing some function in a low-level subsystem, and the tradeoff is considered. However, it concludes that the end-to-end check of the file transfer application must still be implemented no matter how reliable the communication system becomes.

By considering further end-to-end examples, such as delivery guarantees, secure transmission of data, duplicate message suppression, guaranteed FIFO message delivery, and transaction management, the paper argues that application designers need to understand their applications carefully: a lower-level subsystem that supports a distributed application may be wasting its effort providing a function that must, by its nature, be implemented at the application level anyway. The argument applies to a variety of functions beyond reliable data transmission.

By discussing voice connections between two digital telephone instruments, the paper demonstrates that the requirements of each application play an important role in identifying and analyzing application and protocol designs; in other words, one must ask what the most important criterion for a given application is.

This paper is interesting in the sense that it presents several design perspectives for readers and designers to weigh the tradeoffs of implementing applications such as peer-to-peer applications. The advantages and disadvantages of each implementation in different scenarios are helpful, since the possible threats are pointed out.
However, the authors do not discuss how to improve performance and efficiency when implementing such applications. Future research could address guaranteeing timing requirements for real-time distributed systems.

The Design Philosophy of the DARPA Internet Protocols

The paper aims to provide the reasoning behind the TCP/IP protocols. The primary goal in designing TCP/IP was an effective technique for multiplexed utilization of existing interconnected networks using packet switching. A set of second-level goals is then summarized, and these goals and their interrelations are discussed to arrive at the solutions embodied in the protocols.

The paper then discusses how the architecture was designed, implemented, and verified against this set of goals, and how performance was improved. The advantages of datagrams, the fundamental feature of the Internet architecture, are presented and explained. It closes with a brief review of TCP’s history.

Packet switching was selected as the technique for multiplexed utilization of the existing interconnected networks. Processors called gateways implement a store-and-forward packet forwarding algorithm. To cope with failures of networks and gateways, the state information describing an ongoing conversation must be protected, so it is stored at the endpoints of the net, in the entities utilizing the network service, rather than at the intermediate packet-switching nodes.

TCP is not suited to real-time delivery of digitized speech for teleconferencing, since real-time digital speech prefers minimizing packet delivery delay over reliable service. Thus more than one transport service is required; this is why TCP and IP were separated into two layers, and why UDP emerged on top of IP.

The paper shows us the forces behind the formation of the TCP/IP protocols and briefly sketches their evolution. However, it does not discuss the lower-priority goals, like resource management, in depth. The paper gives readers more insight into the TCP/IP design rationale.

The datagram model does not seem to be a good fit for resource management, and we should look into this more in future research. For better resource management, gateways should keep information (state) about the flows passing through them to improve processing performance. This can be enabled by having endpoints periodically send messages to ensure that the proper type of service stays associated with their flows.