Pete Heist <notifications@github.com> writes:

> Trying to confirm how latency was being calculated before with the
> UDP_RR test. Looking at its raw output, I see that transactions per
> second is probably used to calculate RTT, with interim results like:
>
> ```
> NETPERF_INTERIM_RESULT[0]=3033.41
> NETPERF_UNITS[0]=Trans/s
> NETPERF_INTERVAL[0]=0.200
> NETPERF_ENDING[0]=1511296777.475
> ```
>
> So RTT = (1 / 3033.41) s ~= 330 µs
>
> And this presumably takes the mean over all transactions in the
> interval, reported at the end of it, and the calculated latency is
> what was plotted in flent?

Yup, that's exactly it :)
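For reference, the conversion is straightforward: each UDP_RR transaction is one request/response round trip, so the mean RTT over an interval is the reciprocal of the reported transaction rate. A minimal sketch (the function name is my own, not from flent or netperf):

```python
def rtt_us_from_tps(tps: float) -> float:
    """Mean round-trip time in microseconds for a given Trans/s rate."""
    return 1e6 / tps

# The interim result quoted above:
print(round(rtt_us_from_tps(3033.41)))  # ~330 µs per transaction
```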
