On Thu, Jan 7, 2021 at 1:35 PM Dave Taht wrote:
> See: https://arxiv.org/pdf/2012.14996.pdf

Thanks for the link!

> Things I really like:
>
> * they used flent
> * Using "variance" as the principal signal. This is essentially one of
>   the great unpublished and unanalyzed improvements on the minstrel
>   algorithm as well
> * Conventional ecn response
> * outperforms bbr on variable links

What did you have in mind by "variable links" here? (I did not see that
term in the paper.)

Rather than characterizing the algorithm as using "variance" as the
principal signal, my sense is that the estimated BDP is the primary
signal, and the algorithm uses variance as a secondary signal to adapt
the gain.

I would be interested to hear how the algorithm performs on real-world
paths with high degrees of aggregation and RTT variance, including
wifi, cellular, and 10 Gbps+ Ethernet LANs. The paper mentions "TCP D*
sets the window to its estimated BDP," and our experience is that
setting cwnd to the estimated BDP produces unusably low throughput over
these kinds of paths. On these paths the min_rtt is very different from
the typical RTT, so setting the cwnd purely from the min_rtt can lead
to very significant underutilization:

https://datatracker.ietf.org/meeting/101/materials/slides-101-iccrg-an-update-on-bbr-work-at-google-00#page=5

Another interesting aspect is that the algorithm seems completely
agnostic to packet losses. It would be interesting to see how it
behaves in shallow or mid-sized buffers with a highly dynamic mix of
traffic.

best,
neal
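
P.S. To make the min_rtt-vs-typical-RTT point concrete, here is a tiny
back-of-the-envelope sketch in C. The numbers are hypothetical (roughly
the shape one might see on an aggregating wifi link), not taken from the
paper or from any particular measurement; the point is just the
arithmetic of sizing cwnd from min_rtt when the typical RTT under
aggregation is much larger:

    /* Illustrative only: hypothetical path with ~300 Mbit/s delivery
     * rate, 2 ms min_rtt, and 20 ms typical RTT under aggregation.
     */
    #include <stdio.h>

    int main(void)
    {
            double bw_Bps    = 300e6 / 8;  /* delivery rate, bytes/sec      */
            double min_rtt_s = 0.002;      /* propagation-level min RTT     */
            double typ_rtt_s = 0.020;      /* typical RTT under aggregation */
            double mss       = 1448;       /* payload bytes per segment     */

            /* cwnd sized purely from min_rtt (the estimated "BDP") */
            double cwnd_min = bw_Bps * min_rtt_s / mss;
            /* cwnd actually needed to keep the pipe full at the typical RTT */
            double cwnd_typ = bw_Bps * typ_rtt_s / mss;

            printf("cwnd from min_rtt:        %.0f packets\n", cwnd_min);
            printf("cwnd needed at typ. RTT:  %.0f packets\n", cwnd_typ);
            printf("approx. utilization:      %.0f%%\n",
                   100.0 * cwnd_min / cwnd_typ);
            return 0;
    }

With these (made-up) numbers, a cwnd sized purely from min_rtt keeps
only about a tenth of the pipe full at the typical RTT, which is the
kind of underutilization the slides linked above are describing.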