Table of Contents
I. Summary
II. Objective
III. Implementation
IV. Results
V. Questions and Answers
VI. Conclusion

Summary
Lab 1 demonstrates the congestion control algorithms implemented by the Transmission Control Protocol (TCP). It provides three scenarios that simulate these algorithms and then compares the results. TCP maintains a state variable known as the congestion window, which prevents the network from clogging by regulating and limiting the amount of data sent into the network, in turn reducing congestion, timeouts, and lost packets. Beyond this, the lab also discusses features such as slow start, fast retransmit, and fast recovery.
Slow start makes the congestion window begin small and grow gradually at the start of a TCP session, while fast retransmit and fast recovery are loss-detection mechanisms that let TCP retransmit a dropped packet as soon as the loss is detected, rather than waiting out a full timeout.

Objective
The objective of Lab 1 is to study the congestion control algorithms implemented in TCP. This is done by simulating a network in the Riverbed Modeler software and collecting the results from the simulation runs.

Implementation
To capture congestion window data for TCP, we placed two subnets in the USA, one on the west coast and one on the east coast, connected by an IP cloud. One subnet contains a server that hosts an FTP service, and the other contains a client that connects to the server using TCP. For this lab we send a 10,000,000-byte (10,000 KB, or about 10 MB) file during the FTP session, and we track the congestion window and the sent segment sequence number. We duplicate the original scenario to create two more, for a total of three scenarios with slight variations: the first scenario drops no packets, the second drops 0.5% of the packets in the IP cloud, and the third also drops 0.5% of the packets but has fast retransmit enabled.

Results
In the No_Drop scenario, where no packets are dropped, we observe a gradually increasing line for both the congestion window and the sent segment sequence number. Because no packets are dropped, no congestion is ever detected, and the steady upward trend reflects equally sized segments arriving in sequence. In the Drop_NoFast scenario the congestion window swings up and down, while the segment sequence number still keeps its gradually increasing trend except in the regions where congestion occurs. This is caused by the 0.5% packet drop at the IP cloud: dropped packets cause timeouts, the congestion window collapses and regrows (the ups and downs in the graph), and the retransmission of the dropped packets appears as flat horizontal stretches on the segment sequence number graph.
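To make the congestion window behaviour described above more concrete, here is a minimal Python sketch of slow start, congestion avoidance, and the collapse after a timeout. The segment size, threshold, and loss rounds are illustrative assumptions, not values taken from the Riverbed simulation.

```python
MSS = 1460                              # segment size in bytes (assumed value)

def simulate(drop_rounds, rounds=40):
    """Track the congestion window (in bytes) over a number of round trips."""
    cwnd, ssthresh = MSS, 64 * MSS
    history = []
    for r in range(rounds):
        if r in drop_rounds:            # a timeout is detected in this round
            ssthresh = max(cwnd // 2, 2 * MSS)
            cwnd = MSS                  # fall back to slow start
        elif cwnd < ssthresh:
            cwnd *= 2                   # slow start: exponential growth per RTT
        else:
            cwnd += MSS                 # congestion avoidance: linear growth per RTT
        history.append(cwnd)
    return history

print(simulate(drop_rounds=set()))      # No_Drop: the window only grows
print(simulate(drop_rounds={12, 25}))   # Drop_NoFast: the window collapses at every loss
```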
Questions and Answers
1) Why does the Segment Sequence Number remain unchanged (indicated by a horizontal line in the graphs) with every drop in the congestion window?
Answer: As seen in the simulation runs, the Segment Sequence Number stays flat (the horizontal line) because a dropped packet (the 0.5% drop in the IP cloud) leads to a retransmission timeout. While the sender waits out the timeout, no new segments are sent, so the sequence number does not advance; when the timeout is detected, the congestion window is reduced, which produces the corresponding drop in the congestion window graph.

2) Analyze the graph that compares the Segment Sequence numbers of the three scenarios. Why does the Drop_NoFast scenario have the slowest growth in sequence numbers?
Answer: The Drop_NoFast scenario has the slowest growth in sequence numbers because 0.5% of its packets are dropped and fast retransmit is turned off in the TCP settings. Every loss therefore causes a timeout, and one long timeout period usually passes before the packet is transmitted again. With fast retransmit turned on, a lost packet is retransmitted after only three duplicate acknowledgements, roughly one round-trip time, so recovery is much faster.
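As a small illustration of the fast retransmit rule mentioned in the answer above, the sketch below counts duplicate acknowledgements and resends a segment after the third duplicate; the ACK stream is an invented example.

```python
DUP_ACK_THRESHOLD = 3

def fast_retransmit(acks):
    """Return the sequence numbers that would be retransmitted early."""
    retransmitted = []
    last_ack, dup_count = None, 0
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == DUP_ACK_THRESHOLD:
                retransmitted.append(ack)   # resend the missing segment now,
                                            # well before a timeout would fire
        else:
            last_ack, dup_count = ack, 0
    return retransmitted

# ACKs for segment 3 repeat because segment 3 was lost; it is resent after
# the third duplicate acknowledgement.
print(fast_retransmit([1, 2, 3, 3, 3, 3, 7]))   # -> [3]
```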
3) In the Drop_NoFast scenario, obtain the overlaid graph that compares Sent Segment Sequence Number with Received Segment ACK Number for Server_West. Explain the graph.
Hint: Make sure to assign all values to the Capture mode of the Received Segment ACK Number statistic.
Answer: Although it appears at first that there is no red line (Received Segment ACK Number) in the graph, zooming in shows that it is almost perfectly overlaid by the blue line (Sent Segment Sequence Number). The overlaid curves show that acknowledgements closely track the sent sequence numbers; the small separations correspond to the timeout periods in which packets were lost and retransmitted, so the server went without acknowledgements only for very short intervals.

4) Create another scenario as a duplicate of the Drop_Fast scenario. Name the new scenario Q4_Drop_Fast_Buffer. In the new scenario, edit the attributes of the Client_East node and assign 65535 to its Receiver Buffer (bytes) attribute (one of the TCP Parameters). Generate a graph that shows how the Congestion Window Size (bytes) of Server_West is affected by the increase in the receiver buffer (compare the congestion window size graph from the Drop_Fast scenario with the corresponding graph from the Q4_Drop_Fast_Buffer scenario).
Answer: The red line corresponds to the increased receiver buffer. Because of the larger buffer, more bytes can be in flight in the same period of time, and fewer loss-and-timeout periods occur before the file finishes transferring. With the larger buffer, the same file is transferred in about 25% of the time it took previously.

Conclusion
Lab 1 helped me understand and visualize how congestion occurs in TCP and how it is controlled. I also learned how features such as fast retransmit, and adjusting the buffer size on the receiver's side, can significantly reduce the time taken to transfer data, as well as how the segment sequence number and segment acknowledgement number relate to these control mechanisms.
What does TCP mean? TCP (Transmission Control Protocol) is a set of rules that governs the delivery of data over the Internet or any other network that uses the Internet Protocol, and it sets up a connection between the sending and receiving computers before data is exchanged.
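As a quick illustration of that connection setup, here is a minimal Python socket sketch; the loopback address, port, and message are arbitrary example values, and the "server" is just a local echo thread.

```python
import socket
import threading

# Receiving side: listen for a TCP connection on a local port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 6000))
srv.listen(1)

def serve():
    conn, _ = srv.accept()             # the TCP three-way handshake completes here
    with conn:
        conn.sendall(conn.recv(1024))  # echo the data back over the same connection

threading.Thread(target=serve, daemon=True).start()

# Sending side: open a connection, send data, and read the reply.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 6000))
    cli.sendall(b"hello over TCP")
    print(cli.recv(1024))              # data arrives reliably and in order
srv.close()
```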
Sliding windows, a strategy also called windowing, is used by the Internet's Transmission Control Protocol (TCP) as a technique for controlling the flow of packets between two computers or network hosts. TCP requires that all transmitted data be acknowledged by the receiving host. Sliding windows is a technique by which multiple packets of data can be acknowledged with a single acknowledgement.
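The short Python sketch below illustrates the sliding-window idea in simplified form: up to a fixed window of segments may be outstanding at once, and one cumulative acknowledgement covers several of them. The window size and segment numbers are made-up example values.

```python
WINDOW = 4
segments = list(range(10))        # segment sequence numbers to send

base = 0                          # oldest unacknowledged segment
next_seq = 0                      # next segment to transmit
while base < len(segments):
    # send while the window is not full
    while next_seq < len(segments) and next_seq < base + WINDOW:
        print(f"send segment {segments[next_seq]}")
        next_seq += 1
    # a single cumulative ACK slides the window forward over every
    # segment it covers
    ack = min(base + WINDOW, len(segments))
    print(f"receive cumulative ACK up to {ack}")
    base = ack
```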
b) The bottleneck is the resource that has the smallest capacity. It forms the weakest link in the overall process chain and determines the process capacity; in other words, the process capacity equals the minimum capacity of its resources. Since every flow unit needs to be processed by each resource (say there are a total of n resources in the process), the process capacity can be written as: Process capacity = min(capacity of resource 1, capacity of resource 2, ..., capacity of resource n).
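A tiny Python sketch of the same rule, with made-up resource names and capacities (units per hour):

```python
# Example capacities per resource; these numbers are illustrative only.
resource_capacity = {"station 1": 40, "station 2": 25, "station 3": 60}

process_capacity = min(resource_capacity.values())            # capacity of the whole process
bottleneck = min(resource_capacity, key=resource_capacity.get)  # the resource that limits it

print(f"bottleneck: {bottleneck}, process capacity: {process_capacity} units/hour")
```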
3. 135.46.52.2
Ans: The given address does not match the 135.46.56.0/22 entry (it is lower than 135.46.56.0), so the default route is used and the packet is routed out over router 2.
4. 192.53.40.7
Ans: 192.53.40.7 AND 255.255.254.0 = 192.53.40.0. This matches the 192.53.40.0/23 routing entry, so the packet is routed out over router 1.
5. 192.53.56.7
Ans: 192.53.56.7 AND 255.255.254.0 = 192.53.56.0. There is no matching entry, so the default route is used and the packet is routed out over router 2.
QUES 2. A large number of consecutive IP addresses are available starting at 198.16.0.0. Suppose that four organizations, A, B, C, and D, request 4000, 2000, 4000, and 8000 addresses, respectively, in that order. For each of these, give (a) the first IP address assigned, (b) the last IP address assigned, and (c) the mask in the w.x.y.z/s notation.
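One way to check an allocation like this is to compute it programmatically. The sketch below rounds each request up to a power of two and aligns each block on its own size, which is the standard CIDR allocation rule; the allocate helper itself is an illustrative assumption, not part of the original question.

```python
import ipaddress
import math

def allocate(base, requests):
    """Assign consecutive CIDR blocks starting at base, aligning each block on its size."""
    cursor = int(ipaddress.IPv4Address(base))
    out = []
    for name, count in requests:
        size = 1 << math.ceil(math.log2(count))       # round request up to a power of two
        cursor = (cursor + size - 1) // size * size   # align the block on its own size
        prefix = 32 - int(math.log2(size))
        net = ipaddress.ip_network((cursor, prefix))
        out.append((name, net[0], net[-1], f"/{prefix} ({net.netmask})"))
        cursor += size
    return out

for row in allocate("198.16.0.0", [("A", 4000), ("B", 2000), ("C", 4000), ("D", 8000)]):
    print(*row)   # organization, first address, last address, mask
```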
The last phase is data exchange. In this phase, the client and server exchange data by creating one or more data channels, and within each channel the flow is controlled using the window space available. A channel's life has three stages: open channel, data transfer, and close channel. Once a channel is opened by either party, data is transferred, and then the channel is closed by either party [3].
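As a concrete illustration of that channel life cycle, here is a minimal sketch using the Paramiko SSH library; the hostname and credentials are placeholders, and error handling is omitted.

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("example.com", username="user", password="secret")  # placeholder credentials

transport = client.get_transport()
channel = transport.open_session()    # stage 1: open channel
channel.exec_command("uname -a")      # stage 2: data transfer over the channel
print(channel.recv(4096).decode())    # read the output (flow limited by window space)
channel.close()                       # stage 3: close channel
client.close()
```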
When it comes to getting network traffic from point A to point B, no single approach suits every application. Voice and video applications require minimal delay variation, while mission-critical applications require hard guarantees of service and rerouting.
Next are the high message rates and high-speed connections in HFT, which determine the response speed for market order entry, order quotation, and order cancellation.
TCP/IP is the most important internet protocol suite in the world. While the IP protocol performs the bulk of the functions needed for the internet to work, it lacks many capabilities that applications require. In the TCP/IP model these tasks are performed by a pair of protocols that operate at the transport layer: the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). These two protocols are vital for delivering and managing the communication of numerous applications. To pass data streams to the proper applications, the transport layer must identify the target application. To achieve this, the transport layer assigns each application an identifier; in the TCP/IP model this identifier is called a port number. Every individual software process needing to access the network is assigned a unique port number on that host.
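A short Python sketch of how a port number identifies the target application; the loopback address and port 5005 are arbitrary example values.

```python
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP endpoint
receiver.bind(("127.0.0.1", 5005))                             # this application is identified by port 5005

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", 5005))                   # the transport layer delivers to port 5005

data, addr = receiver.recvfrom(1024)
print(f"received {data!r} from {addr}")                        # addr includes the sender's ephemeral port
sender.close()
receiver.close()
```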
Congestion Control Transfer Protocol (CCTP) is an advanced, reliable, message-oriented transport layer protocol. CCTP lies between the network layer and the application layer and serves as the agent between network operation and application programs. The figure below shows the IP suite and the relationship of the CCTP protocol to the others. This protocol blends the prominent characteristics of TCP, UDP, and SCTP.
After reviewing the charts created from the packets given to the class, I discovered that the results were right around where I expected them to be.
I'm going to give an overview of how TCP/IP fits into the entire system, keeping in mind the OSI reference model (Fig. 1). While TCP and/or
- The packet scheduler determines the order of packet transmission to achieve the required Quality of Service for multimedia streaming.
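A minimal Python sketch of such a priority-based packet scheduler; the QueuedPacket type and the two traffic classes are illustrative assumptions rather than part of any particular implementation.

```python
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedPacket:
    priority: int                 # lower value = served first (e.g. 0 = voice/video, 1 = bulk data)
    seq: int                      # tie-breaker preserves arrival order within a class
    payload: bytes = field(compare=False)

class PacketScheduler:
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()

    def enqueue(self, payload: bytes, priority: int) -> None:
        heapq.heappush(self._queue, QueuedPacket(priority, next(self._counter), payload))

    def dequeue(self) -> bytes:
        return heapq.heappop(self._queue).payload   # next packet chosen by priority, then arrival order

sched = PacketScheduler()
sched.enqueue(b"bulk-1", priority=1)
sched.enqueue(b"video-frame", priority=0)
sched.enqueue(b"bulk-2", priority=1)
print(sched.dequeue())   # b'video-frame' is transmitted first
```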