Reining in Bandwidth With Squid Proxying
No matter how fat your incoming Internet connection, someone will always find a way to hog it and leave the rest of your users wishing for faster methods of communication, like carrier pigeons, or messages in bottles. Having an acceptable use policy is the first step; after that, you are justified in beating offenders with sticks.
Real-life example: A friend had a boss who spent all day surfing porn. The good news was it kept him out of the way. The bad news was his porn surfing saturated their 256k DSL, so the actual business of the company was impaired. (Actual work, what a concept.) So my friend implemented Squid's delay pools, throttling the boss to a bare minimum. My friend cannily blamed increased sales and business activity, and got the boss to authorize a dedicated T1. So everyone finally got the bandwidth they needed.
(For those of you going "OMG why didn't he tell human resources, or confront the boss, or call the cops, or something" all I can say is, you weren't there. So don't ask.)
Squid Throttles Hogs
The Squid HTTP proxy/caching server has an ingenious feature called delay pools. The excellent O'Reilly book "Squid: The Definitive Guide" calls them "bandwidth buckets," which is a pretty good analogy. You, the ace admin, configure a fill rate and a maximum bucket size. Users who don't consume their full fill rate "save up" bandwidth, which makes burst speeds available. When a burst empties the "bucket," they're limited to the fill rate. So it rewards thrifty users, and puts the brakes on hogs.
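To make the "bandwidth bucket" analogy concrete, here is a minimal token-bucket sketch in Python. This is an illustration of the concept only, not Squid's actual implementation; the class and method names are made up for this example.

```python
# Illustration of delay-pool behavior: a bucket fills at a steady rate,
# bursts are allowed until it empties, then transfers crawl at the fill rate.
class Bucket:
    def __init__(self, fill_rate, capacity):
        self.fill_rate = fill_rate   # bytes added per second
        self.capacity = capacity     # maximum "saved up" bytes
        self.level = capacity        # starts full

    def tick(self, seconds=1):
        # idle time refills the bucket, but never past its capacity
        self.level = min(self.capacity, self.level + self.fill_rate * seconds)

    def consume(self, nbytes):
        # a transfer may take whatever is in the bucket; once empty,
        # it gets only what the fill rate adds back
        taken = min(nbytes, self.level)
        self.level -= taken
        return taken

# Same numbers as the sample config: 32000 bytes/s fill, 128000-byte bucket
b = Bucket(fill_rate=32000, capacity=128000)
print(b.consume(100000))  # full bucket: the whole 100000-byte burst goes through
print(b.consume(100000))  # only 28000 bytes left in the bucket
b.tick()                  # one idle second refills 32000 bytes
print(b.consume(100000))  # now limited to the refilled 32000
```

A thrifty user who idles between requests keeps the bucket topped up; a hog drains it and gets pinned to the fill rate.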
The bad news: If your Squid was not compiled with the --enable-delay-pools configure option (check the output of squid -v), you will have to recompile and reinstall it. The other bad news: Delay pools operate at the application layer and meter bytes per second, not packets, so they are not as precise as something that operates at the transport layer, like tc, which is part of iproute2. The good news is delay pools are a whole lot simpler to use, especially if you already run Squid.
There are three types of buckets:
- Class 1 pool: A single aggregate bucket, shared by all users
- Class 2 pool: One aggregate bucket, 256 individual buckets
- Class 3 pool: One aggregate bucket, 256 network buckets, 65,536 individual buckets
One common gotcha is bucket sizing: clients are limited by the size of the smallest bucket in their path, so don't make the aggregate bucket smaller than its downstream buckets.
Now let the fun begin. squid.conf is where our exciting delay pool configuration takes place.
- delay_pools defines how many pools we want to use.
- delay_class tells which type of pool is being used.
- delay_parameters sets our restrictions: the fill rate and the maximum bucket size, in that order.
This is what a simple configuration looks like:
######## Delay Pools #########
# a simple global throttle, users sharing 256 Kbit/s
delay_pools 1
delay_class 1 1
# 256 Kbit/s fill rate, 1024 Kbit/s reserve
delay_parameters 1 32000/128000
acl All src 0/0
delay_access 1 allow All
The delay_parameters values are bytes, so if you're used to measuring bandwidth speed in bits per second, remember to divide bits by 8.
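The divide-by-8 arithmetic is easy to fumble, so here is a tiny helper that generates the delay_parameters values from speeds in Kbit/s. The function name is made up for this example; the numbers match the sample config above.

```python
# Hypothetical helper: convert link speeds in Kbit/s into the
# bytes-per-second values Squid's delay_parameters expects.
def kbit_to_bytes(kbit_per_s):
    # 1 Kbit = 1000 bits here, matching the article's 256 Kbit -> 32000 bytes
    return kbit_per_s * 1000 // 8

fill = kbit_to_bytes(256)      # fill rate
reserve = kbit_to_bytes(1024)  # maximum bucket size
print(f"delay_parameters 1 {fill}/{reserve}")
# prints: delay_parameters 1 32000/128000
```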
acl All src 0/0 creates an access rule named All, and it includes the entire IP range.
delay_access 1 allow All tells which requests go through which pools.
This configuration places no limitations on individual users; all users share the same bucket. During idle times, Squid will "refill" the bucket, allowing greater-than-256 Kbit/s speed, until the 1024 Kbit/s "reserve" is consumed. Then users are limited to sharing the 256 Kbit/s "fill" rate. You might use this to reserve bandwidth for other applications on an overburdened link. For example, if you have an important application, mail, or Web server that needs a little elbow room, route all your Web surfin' slackers through Squid, and let your servers roam free.
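To throttle individual hogs, like the porn-surfing boss in the story, you need a class 2 pool instead. The following fragment is a hedged sketch, not a tested production config: the numbers and the 192.168.1.0/24 network are placeholders for your own values.

```
######## Delay Pools #########
# class 2: one aggregate bucket plus one bucket per client IP
delay_pools 1
delay_class 1 2
# aggregate fill/max, then per-user fill/max, all in bytes:
# 512 Kbit/s shared, each user held to 64 Kbit/s with a 128 Kbit burst
delay_parameters 1 64000/64000 8000/16000
acl Users src 192.168.1.0/24
delay_access 1 allow Users
```

With this in place, no single client can drain the shared pool; each one hits its own 8000-byte/s fill rate first.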
This article was originally published on ServerWatch on May 21, 2004. Prior to that it was published on Enterprise Networking Planet.