We just upgraded one of our WAN links and we're trying to get an accurate measurement of the new throughput. We've been playing around with jPerf and we're getting totally different results depending on whether we use TCP or UDP. For example, a TCP test gives us less than 1 Mbps of throughput, while a UDP test (where we can set the rate at which we push data) gives us throughput as high as 300 Mbps. I should point out that I was running these tests to a server across the country, with a round-trip latency of about 80 ms.
So which is the more accurate reflection of our throughput, TCP or UDP? I suspect TCP is getting bogged down by the high latency. Maybe TCP sends packets up to its window size and then, because of the 80 ms round trip, has to sit and wait for ACKs before it can send any more, so throughput ends up capped at roughly window size / RTT, and jPerf interprets that as a lack of available bandwidth.
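For what it's worth, here's the back-of-the-envelope math behind that suspicion, as a rough sketch: if the sender can only have one window's worth of unacknowledged data in flight per round trip, throughput tops out at window / RTT. The window sizes below are just guesses for illustration, not values pulled from our actual hosts; only the 80 ms RTT and the 300 Mbps figure come from our tests.

```python
# Rough TCP throughput ceiling from window size and RTT (bandwidth-delay product).
# The 80 ms RTT is from my test; the window sizes are illustrative guesses,
# not values read from our actual configuration.

rtt_s = 0.080  # round-trip time in seconds

def ceiling_mbps(window_bytes, rtt_s):
    """Max throughput if only one window can be in flight per round trip."""
    return window_bytes * 8 / rtt_s / 1_000_000

for window_kb in (8, 64, 256):
    print(f"{window_kb:>4} KB window -> {ceiling_mbps(window_kb * 1024, rtt_s):6.1f} Mbps ceiling")

# Window needed to actually fill a 300 Mbps path at 80 ms RTT (the BDP):
bdp_bytes = 300_000_000 / 8 * rtt_s
print(f"BDP to fill 300 Mbps at 80 ms RTT: {bdp_bytes / 1024:.0f} KB (~{bdp_bytes / 1_048_576:.1f} MB)")
```

An 8 KB window at 80 ms works out to roughly 0.8 Mbps, which is suspiciously close to what we're seeing on the TCP test, while filling 300 Mbps at that RTT would need about 3 MB in flight.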
UDP, on the other hand, will just pump out datagrams at whatever rate we ask for, and based on the report it gets back from the server end it works out how many of those datagrams actually arrived and derives the bandwidth from that. If that's the case, I would think UDP would be a purer reflection of our available bandwidth.
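And here's how I understand the UDP-side arithmetic, again just a sketch with made-up numbers; the 1470-byte datagram size and the loss count are assumptions on my part, not anything pulled from our actual test output.

```python
# How I understand the UDP side: the client blasts datagrams at the requested
# rate, the server counts what arrived, and the reported throughput is just
# received bytes over time. All numbers below are made up for illustration.

datagram_size = 1470            # bytes per datagram (assumed, not from our test)
duration_s = 10.0               # test length
offered_rate_bps = 300_000_000  # the rate we asked the client to push

sent = int(offered_rate_bps / 8 / datagram_size * duration_s)
lost = 1200                     # hypothetical loss reported by the server
received = sent - lost

goodput_mbps = received * datagram_size * 8 / duration_s / 1_000_000
loss_pct = lost / sent * 100

print(f"sent={sent}, lost={lost} ({loss_pct:.2f}%), reported throughput={goodput_mbps:.1f} Mbps")
```

In other words, as I understand it, UDP reports roughly whatever rate we asked for minus whatever got dropped, which would explain how it can claim 300 Mbps even when TCP can't get anywhere close.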
Any thoughts on this? Hopefully someone out there is an iPerf/jPerf expert (and also a TCP/UDP expert).