yubing
trueice at gmail.com
Wed Jul 4 00:00:49 EDT 2007
quite right, thanks a lot :) Then is the connection object suitable for my
case? It seems request-based handling is just for short connections, and
conn_write() has its own pool management code:

    if (len) {
        buff = apr_pmemdup(c->pool, PyString_AS_STRING(s), len);
        bb = apr_brigade_create(c->pool, c->bucket_alloc);
        b = apr_bucket_pool_create(buff, len, c->pool, c->bucket_alloc);
        APR_BRIGADE_INSERT_TAIL(bb, b);

        /* Make sure the data is flushed to the client */
        b = apr_bucket_flush_create(c->bucket_alloc);
        APR_BRIGADE_INSERT_TAIL(bb, b);

        ap_pass_brigade(c->output_filters, bb);
    }

On 7/4/07, Graham Dumpleton <graham.dumpleton at gmail.com> wrote:
>
> On 04/07/07, Graham Dumpleton <graham.dumpleton at gmail.com> wrote:
> > On 04/07/07, Graham Dumpleton <graham.dumpleton at gmail.com> wrote:
> > > On 03/07/07, yubing <trueice at gmail.com> wrote:
> > > > Anyhow, it's clear that ap_rflush is the root cause of this memory
> > > > leak; maybe we should find a new API for this (maybe we should also
> > > > add a new method to the request_object).
> > >
> > > It is not just ap_rflush() but also ap_rwrite(), as it is also using
> > > the request pool.
> > >
> > > Avoiding the problem would mean creating a separate pool just for the
> > > one operation and allocating the buckets from that, with the pool
> > > destroyed at the end of the call.
> > >
> > > Unfortunately, it isn't perhaps quite as simple as that. This is
> > > because one can only use a separate pool if you know the buckets would
> > > be used up and no longer required after the call. When doing a flush
> > > this may be the case, but I need to check. It is definitely not the
> > > case when not doing a flush, though.
> >
> > Okay, I'm confused now. The bucket functions aren't supposed to be
> > allocating memory out of the resource pool.
> >
> >     /**
> >      * Create a new bucket brigade. The bucket brigade is originally empty.
> >      * @param p The pool to associate with the brigade. Data is not
> >      *          allocated out of the pool, but a cleanup is registered.
> >      * @param list The bucket allocator to use
> >      * @return The empty bucket brigade
> >      */
> >     APU_DECLARE(apr_bucket_brigade *) apr_brigade_create(apr_pool_t *p,
> >                                                          apr_bucket_alloc_t *list);
> >
> > I.e., it takes buckets from a bucket list, to which buckets are
> > returned for reuse when the bucket brigade is no longer needed at the
> > completion of the operation.
> >
> > The only reason the request pool is required is so that the bucket
> > brigade can be cleaned up automatically if the code forgets to
> > explicitly destroy it before the request is over.
> >
> > So, it looks like I have some more research to do to understand all
> > this. :-(
>
> I understand now. The buckets themselves are reused, but the bucket
> brigade object is not. Thus, for each ap_rwrite()/ap_rflush(), memory
> will still increase because of the bucket brigade object. What is also
> odd is that the bucket brigade isn't even notionally destroyed, meaning
> that the cleanup handler isn't killed off and would appear to hang
> around until the end of the request, at which point it would be called.
> If you have lots of bucket brigade objects created, that could be a lot
> of cleanup handlers needing to be killed.
>
> Anyway, I will not post any more on this. I'll work out a solution for
> mod_wsgi and then create a JIRA issue for mod_python describing what
> might be done for mod_python when we get around to doing more changes
> to it.
>
> Graham

--
truly yours
ice