Filtering I/O in Apache 2.0
File buckets: This bucket type references a file on the disk. When reading from this bucket, a new bucket is created in front of the file bucket, and the data read from the file is stored in the new bucket. This ensures that we only read from the file once. When writing to the network, we determine how much of the data can be sent using sendfile.
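The read-once caching behavior described above can be sketched in plain POSIX C. Note that this is an illustration of the strategy, not the actual Apache bucket API; the struct and function names here are invented for the example.

```c
/* Sketch of the file-bucket reading strategy: the file is read once,
 * and the result is cached in a heap buffer so that later consumers
 * never touch the disk again. Illustrative names, not the Apache API. */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

struct file_bucket {
    int    fd;      /* open file descriptor             */
    char  *cached;  /* heap copy, NULL until first read */
    size_t len;     /* bytes cached                     */
};

/* Return the bucket's data, reading from disk only on first use. */
static const char *bucket_read(struct file_bucket *b)
{
    if (b->cached == NULL) {
        off_t size = lseek(b->fd, 0, SEEK_END);
        lseek(b->fd, 0, SEEK_SET);
        b->cached = malloc((size_t)size);
        b->len = (size_t)read(b->fd, b->cached, (size_t)size);
    }
    return b->cached;   /* later calls hit the cache, not the file */
}
```

A second call to `bucket_read` returns the same heap pointer, which is the point of inserting a new bucket in front of the file bucket: the disk is touched exactly once.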
MMAP buckets: This bucket references MMAPed files. The data is treated much like heap-bucket data, except that it cannot be modified in the bucket. If the data needs to be modified, a heap bucket must be created and the data copied into that bucket.
Immortal buckets: This bucket type is a generic bucket. Any data type is valid in this bucket, but the data must be managed by some external entity. This is designed for data that a module will create and destroy. Perhaps the best way to describe this is with an example. Mod_mmap_static keeps a cache of mmap'ed files available to increase the performance of Apache. The mmap entities would be immortal buckets. Mod_mmap_static is in charge of creating and destroying the mmaps; the immortal buckets just reference them.
Pool buckets: This bucket references data allocated out of a pool. Pool data is guaranteed to be available as long as the pool is available. When this bucket is created, a cleanup is registered so that when the pool is cleared, if the data is still required, the bucket is converted into a heap bucket.
Pipe buckets: This bucket references a pipe. Pipes are interesting because they destroy themselves as they are read. This means that if I have a pipe and I read data from it, I must save that data someplace or it will be lost. To accomplish this, when pipe buckets are read, a second bucket is created in front of the current bucket. The new bucket is used to store the data read from the pipe. This is very similar to file buckets, except that sendfile can't be used with pipes. Pipe buckets are most commonly used to return data from CGI scripts.
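The destructive nature of pipe reads is easy to demonstrate in plain POSIX C. This sketch shows why the data must be stashed somewhere (the "second bucket" described above) the moment it is read; the `drain_pipe` name is illustrative, not part of the Apache API.

```c
/* Sketch of why pipe buckets must save what they read: data read
 * from a pipe is gone afterwards, so the reader keeps it in a heap
 * buffer. A real pipe bucket inserts a heap bucket holding this
 * data in front of itself. Illustrative names, not the Apache API. */
#include <unistd.h>

/* Read from the pipe once and keep the bytes in `saved`. */
static ssize_t drain_pipe(int readfd, char *saved, size_t max)
{
    return read(readfd, saved, max);  /* consumes the pipe's data */
}
```

After `drain_pipe` returns, a second read on the same pipe yields nothing; the only copy of the data is the one we saved.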
EOS buckets: This bucket does not contain any data. It signals filters that there will be no more data generated for consumption. This tells filters that this is the final time they will be called, so they need to send any data that they have saved in previous calls.
All buckets include pointers to their accessor functions. The details of what these functions do are specific to each bucket type, but we can describe them generally.
- read returns a pointer to the data stored in the bucket and the amount of data returned. Depending on the type of bucket read from, the data can be modified in place.
- split takes one bucket and splits it in two at the specified offset into the bucket data.
- setaside converts a bucket from one type to another. The purpose of this function pointer is to ensure that the data is still available on the next call to the filter function. If the filter must set data aside, then it should loop through the bucket brigade and call the setaside function for any bucket that has one.
- destroy destroys the current bucket and any data that it references and has the rights to destroy. For example, destroying an immortal bucket just destroys the bucket, but leaves the data alone.
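Of these accessors, split is perhaps the least obvious: it produces two buckets from one without copying any data. A minimal sketch in plain C (illustrative names, not the Apache API) shows the idea:

```c
/* Sketch of the split operation: one data range becomes two adjacent
 * ranges that share the same underlying storage -- no copy is made.
 * Illustrative names, not the Apache API. */
#include <stddef.h>

struct range {
    const char *data;   /* start of this bucket's data */
    size_t      len;    /* bytes in this bucket        */
};

/* Split r in two at `offset`; *rest receives the tail. */
static void range_split(struct range *r, size_t offset, struct range *rest)
{
    rest->data = r->data + offset;  /* tail points into the same buffer */
    rest->len  = r->len - offset;
    r->len     = offset;            /* head is simply shortened */
}
```

Because both halves point into the same storage, a filter can, for example, send the first half of a bucket down the chain and set the second half aside without duplicating the data.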
There are just a few more concepts that we need to cover in this overview of filtering. The first is registering a filter with the server. This is done with ap_register_filter.
void ap_register_filter(const char *name,