[OpenWireless Tech] OpenWRT: traffic prioritization between WLANs

michi1 at michaelblizek.twilightparadox.com
Sun Nov 18 05:09:41 PST 2012


Hi!

On 03:15 Sun 18 Nov, John Gilmore wrote:
> > > 3. I understand that my description above might be suboptimal. For
> > > example, a guest using SSH might have their connection break as soon as
> > > someone on the home network downloads a video file, even though the home
> > > network user couldn't notice the difference in download speed if the SSH
> > > connection TCP packets got enough priority to maintain the connection
> > > (supposing a normal SSH session, not using SSH to tunnel
> > > bandwidth-intensive traffic).
> > 
> > This can happen. It is called starvation.
> 
> There is no reason that an ssh connection should break when a video is
> being uploaded or downloaded.  The reason that it sometimes happens in
> the current Internet is called "bufferbloat" and results from a
> mismatch between TCP's method of detecting network congestion (dropped
> packets) and modern network hardware's method of handling network
> congestion (buffering packets in cheap RAM rather than dropping them).

I know what bufferbloat is (and I suggested codel in an earlier mail).
However, we were talking about something different: the point was that
QoS/priority queueing can cause starvation if done badly. Also, I think
that these guest networks are a case where QoS is both helpful and legitimate.
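As a sketch of what "done well" could look like on an OpenWRT box: HTB can
guarantee the guest network a minimum rate while letting either side borrow
spare capacity, with fq_codel on each leaf so neither class bloats its own
queue. The interface name, rates, and class ids below are illustrative
assumptions, not something from this thread:

```shell
# Hypothetical setup: wlan0-1 is the guest WLAN interface and the
# upstream link is 10mbit. All names and numbers are illustrative.
tc qdisc add dev wlan0-1 root handle 1: htb default 20

# Parent class: the total link rate.
tc class add dev wlan0-1 parent 1: classid 1:1 htb rate 10mbit

# Guest class: guaranteed 1mbit, may borrow up to the full rate.
tc class add dev wlan0-1 parent 1:1 classid 1:10 htb rate 1mbit ceil 10mbit

# Home class: guaranteed the rest, may also borrow.
tc class add dev wlan0-1 parent 1:1 classid 1:20 htb rate 9mbit ceil 10mbit

# fq_codel on each leaf keeps per-class latency low.
tc qdisc add dev wlan0-1 parent 1:10 fq_codel
tc qdisc add dev wlan0-1 parent 1:20 fq_codel
```

Because HTB guarantees each class its configured rate, the guest's ssh
session cannot be starved outright, which was the failure mode above.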

> The result is that the TCP connection carrying the video doesn't
> throttle back its transmissions to just what the network can carry; it
> starts filling up the queues in all the switches in between.  Then the
> ordinary TCP connections nearby can't get through those queues.  And
> every connection suffers because of high latency -- especially,
> interactive ones like ssh get laggy.
> 
> In a sense, the problem is because network researchers in the 1970s
> didn't figure out that RAM was going to be cheap.  The right answer
> would've been to have TCP measure the latency, and throttle back if it
> gets longer.  Bram Cohen fixed this a few years ago in BitTorrent: it
> stopped using TCP, it measures the delay itself, and it throttles back
> when the delay increases.  It hasn't bothered ISPs since then.

Measuring the load via latency sounds messy. You might have a high-bandwidth
connection around the globe. Or you might have a link which was already
congested when you started creating the new connection. Some time ago there
was a project called "FAST TCP" which tried this. AFAIK, it used packet loss
as a metric and jitter as a pre-loss hint. Since then, Linux has gained
support for pluggable congestion controls and nearly every network engineer
in the world has started experimenting with them. I guess the "conclusion"
is: stick to loss as the congestion indicator and resolve the bufferbloat
issues.
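For reference, the pluggable congestion controls are exposed through sysctl
on any reasonably recent Linux kernel (which algorithms show up depends on
the kernel build and loaded modules):

```shell
# List the congestion control algorithms the running kernel offers.
sysctl net.ipv4.tcp_available_congestion_control

# Show the current system-wide default.
sysctl net.ipv4.tcp_congestion_control

# Switch the default, e.g. to cubic (requires root).
sysctl -w net.ipv4.tcp_congestion_control=cubic
```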

Also, the problem is that if you want congestion control to be anywhere
close to "fair", you need to adapt its aggressiveness based on the
application. For example, BitTorrent should be very sensitive to packet
loss, because it establishes quite a few connections and has a high duty
cycle. Or better, all the others should be more aggressive. Right now they
are so conservative that they often cannot utilise links with a high
bandwidth-delay product.
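To put a number on the bandwidth-delay-product point (the figures here are
illustrative, not from this thread): a loss-based TCP has to keep a full BDP
of data in flight to saturate the link, and classic TCP halves its window on
every loss:

```shell
# Bandwidth-delay product: bytes that must be in flight to fill the pipe.
# Illustrative numbers: a 100 Mbit/s long-haul path with 100 ms RTT.
RATE_BPS=100000000   # link rate in bits per second
RTT_MS=100           # round-trip time in milliseconds

BDP_BYTES=$(( RATE_BPS / 8 * RTT_MS / 1000 ))
echo "$BDP_BYTES"    # 1250000 bytes: a ~1.2 MB window is needed, and a
                     # single loss halves it, taking many RTTs to recover
```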

> Luckily for us, some smart researchers figured out the Bufferbloat
> problem two years ago, starting with Jim Gettys.  They restarted
> research on "active queue management" that would deliberately drop
> packets to let TCP know to throttle back to keep the latency low for
> everybody.  This produced an algorithm called codel ("coddle" -
> controlled delay) and a flow control method for Linux 3.5 or better
> (fq_codel) that implements it.  They even made a variant of openwrt,
> called cerowrt, that includes the codel stuff, plus IPv6, turned on by
> default.  I think that fq_codel is in openwrt itself now, but it
> may not be enabled by default.
> 
> Fixing TCP would've solved this end-to-end.  Using codel only solves
> the bufferbloat problem in the device where you deploy codel.

Most often, the bottleneck is at the end points, so basically it is about
the subscriber and ISP modems. You really should not have a bottleneck in
the backbone: the network will crash rather than degrade. Codel and fair
congestion controls combined may mitigate this. But when the network is
constantly under high load, users will start switching to aggressive
congestion controls.
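The scale of the modem problem is easy to estimate (numbers illustrative):
the worst-case queueing delay a full FIFO buffer adds is simply its size
divided by the rate at which the link drains it.

```shell
# Worst-case delay added by a full FIFO at the bottleneck:
# delay = buffer size / drain rate. Illustrative numbers below.
BUF_BYTES=$(( 256 * 1024 ))   # 256 KiB of "cheap RAM" in a DSL modem
RATE_BPS=1000000              # 1 Mbit/s uplink

DELAY_MS=$(( BUF_BYTES * 8 * 1000 / RATE_BPS ))
echo "${DELAY_MS} ms"         # ~2097 ms: over two seconds of lag for ssh
```

This is why deploying fq_codel on the subscriber-side device helps so much
even though the rest of the path is untouched.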

> (Some
> people think we can't fix TCP now, because a fixed TCP would back off
> when competing with an original TCP, causing people with the fixed TCP
> to get worse network performance by comparison, until all the original
> TCPs near them went away.)

One more reason to avoid congestion on backbones: every TCP implementation
thinks differently about how fast it can send.

> > > Ideally, the guest should be able to use
> > > SSH while the video file download still gets the lion's share of the
> > > bandwidth. I think a more advanced approach would be to guarantee a
> > > small portion of bandwidth for the guest connection if needed.
> 
> As Van Jacobson and Katherine Nichols say in their CoDel paper, "Since
> normal cures for congestion such as usage limits or usage-based
> billing have no effect on bufferbloat but annoy customers and
> discourage network use, addressing the real problem would be prudent."

I doubt that codel makes these measures obsolete.

	-Michi
-- 
programing a layer 3+4 network protocol for mesh networks
see http://michaelblizek.twilightparadox.com


