Date: Tue, 15 Dec 2020 03:37:52 -0800
From: Toke Høiland-Jørgensen
Reply-To: tohojo/flent
Subject: [Flent-users] Re: [tohojo/flent] rrul tests are not being able to saturate link (#218)
List-Id: Flent discussion list

olg33 <notifications@github.com> writes:

> Hi,
>
> We are installing a new satellite link and we are using flent to run
> bandwidth testing. The idea is to saturate the link with marked and
> unmarked packets.
>
> However initial results show that the test was not able to reach the
> maximum bandwidth downstream supported by the link which is about
> 60Mbps.
>
> For unmarked traffic we ran the following test:
> `flent rrul -H 10.111.40.252`
>
> For marked traffic we used the following test:
> `flent rrul_var -H 10.111.40.252 --test-parameter bidir_streams=5 --test-parameter markings=11,13,15,19,21 `
>
> In both cases we reached no more than 50% of the link capacity.
>
> To rule out any problem with the link itself, we ran the following test:
> `flent tcp_12down -H 10.111.40.252`
>
> This time, we were able to reach saturation levels (60Mbps) of
> downstream traffic. We also ran iperf3 and obtained the same results.
>
> Do you see any reason why the rrul and rrul_var tests aren't able
> to generate enough traffic to saturate the link? Is there a
> parameter I'm missing, or something? Or maybe that's normal; in any
> case, I'd appreciate any comments on this matter.

Hmm, when you say satellite link, I assume this has a really high RTT,
right? That usually makes it really difficult for TCP to saturate the
connection; I believe providers try to improve on this with various
kinds of "accelerators" that manipulate the TCP connection to
alleviate the problem. Could that acceleration be failing when there's
bidirectional traffic?
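
To put rough numbers on that: for a single TCP flow to fill the pipe,
it needs to keep at least a bandwidth-delay product's worth of data in
flight. A quick sketch, assuming a geostationary hop of ~600 ms RTT
(an assumption, not a measured value) and your 60 Mbps figure:

```shell
# Bandwidth-delay product a single TCP flow must keep in flight.
# RTT of 600 ms is an assumed value for a geostationary link.
RATE_MBIT=60
RTT_MS=600
BDP_BYTES=$(( RATE_MBIT * 1000000 * RTT_MS / 1000 / 8 ))
echo "BDP: ${BDP_BYTES} bytes (~$(( BDP_BYTES / 1024 / 1024 )) MiB)"
```

If the receiver's window or the socket buffers can't grow to several
megabytes, no amount of offered load will saturate the downstream.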

Another thing to note is that when you're running bidirectional
traffic, the bulk traffic will be competing with ACKs in each
direction. In particular, if there's a lot of queueing delay on the
upstream, that will delay the ACKs, which can prevent TCP from ramping
up properly on the downstream. What kind of queueing is on the
bottleneck link, and are you seeing the latency increase during the
test?
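
To illustrate how bad that second effect can get, here's a
back-of-the-envelope sketch of the extra delay a full upstream FIFO
adds to every ACK (the buffer size and upstream rate are assumptions,
not measured values for your link):

```shell
# Extra one-way delay added by a saturated FIFO on the upstream.
# A 256 KiB buffer and a 5 Mbit/s upstream are assumed values.
BUF_BYTES=$(( 256 * 1024 ))
UP_RATE_MBIT=5
QUEUE_DELAY_MS=$(( BUF_BYTES * 8 * 1000 / (UP_RATE_MBIT * 1000000) ))
echo "added ACK delay: ${QUEUE_DELAY_MS} ms"
```

Several hundred extra milliseconds on every ACK, on top of an
already-long satellite RTT, is plenty to stall the downstream ramp-up.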

My guess is that the latter effect is the most likely, or maybe a
combination of the two? You may be able to see something interesting
if you capture the traffic and look at some tcptrace plots of the
transfers.
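
Something along these lines could produce those plots; the interface
name, file names, and running the capture on the client side are all
assumptions to adapt to your setup:

```shell
# Capture a flent run and turn it into tcptrace/xplot graphs.
# eth0 and the file names are placeholders; adjust to your setup.
capture_and_plot() {
    # capture headers only (-s 128) to keep the file small
    tcpdump -i "$1" -s 128 -w rrul.pcap host 10.111.40.252 &
    TCPDUMP_PID=$!
    flent rrul -H 10.111.40.252
    kill "$TCPDUMP_PID"
    tcptrace -G rrul.pcap   # -G: generate all graphs (a2b_tsg.xpl etc.)
}
# only attempt the run when the tools are actually installed:
if command -v tcpdump >/dev/null && command -v flent >/dev/null \
        && command -v tcptrace >/dev/null; then
    capture_and_plot eth0
fi
```

The time-sequence graphs (the a2b_tsg.xpl files, viewable with
xplot.org) make stalls, retransmissions and receive-window limits
quite visible.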

The RRUL test is deliberately designed to stress connections and
induce these sorts of weird failure modes, so I guess you could say
this is not unexpected. But it's obviously not ideal behaviour for the
link :)


You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/218#issuecomment-745233711
