[mod_python] global var on 2.7

Jorey Bump list at joreybump.com
Sun Jul 10 20:52:38 EDT 2005


Graham Dumpleton wrote:
> 
>>> I'd do this, probably:
>>>
>>> # CacheBigdic.py
>>>
>>> bigdic = looong_and_painful_process()
>>>
>>> # publishedmodulewithuniquename.py
>>>
>>> import CacheBigdic
>>> def index(req):
>>>     ...
>>>     use(CacheBigdic.bigdic)
>>>     ...
> 
> This way of doing things can be problematic and may not be advisable.
> 
> The problem is that since the long and painful process is done at time
> of import (while the thread holds the global import lock) in a threaded
> MPM, you will block every request in the process which might want to
> import a module. Thus, if it took ten seconds, in the worst case when
> an Apache process has just started, all requests could be stalled for
> ten seconds.

A threaded MPM is not used by Apache 1.3.x, which was the reason for the 
original post.

> You are much better off doing it the first time you need it, as you
> were doing. Doing it only the first time it is required also enables
> you to more easily return an error response if it fails. Subsequent
> requests can then automatically try again to initiate the action.
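That lazy approach might look something like the following sketch. Here looong_and_painful_process() is only a stand-in for whatever actually builds the big dictionary, and the error message is my own guess at how a first-request failure could be reported:

```python
# Lazy, per-request initialization sketch. looong_and_painful_process()
# is a stand-in here for whatever actually builds the big dictionary.

def looong_and_painful_process():
    # Placeholder for the expensive build step.
    return {"answer": 42}

_bigdic = None

def get_bigdic():
    # Build the cache only on first use, so a failure can be reported
    # in the response and retried automatically by a later request.
    global _bigdic
    if _bigdic is None:
        _bigdic = looong_and_painful_process()
    return _bigdic

def index(req):
    try:
        data = get_bigdic()
    except Exception:
        return "cache initialization failed, please retry"
    return "answer is %d" % data["answer"]
```

Because the build runs inside the request handler rather than at import time, it never holds the global import lock while other threads are trying to import modules.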

True, perhaps, now that we know it's a readonly object, in which case 
making it global is kind of a moot point. Anyway, I did say this was 
only semipersistent (in the sense that different processes won't share 
the exact same object). Without knowing the details of the application, 
it's hard to say if it benefits from a semipersistent cache or implicit 
creation/destruction for each request. I'll side with you here, although 
the OP would probably do well to consider a database with connection 
pooling of some sort.
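For a sense of what a pool of that sort does, here is a minimal, hypothetical sketch. SimplePool and its factory argument are names I made up; a real deployment would use an existing pooling library rather than this:

```python
# Minimal, hypothetical connection-pool sketch: a fixed number of
# connections are created up front, handed out per request, and
# returned afterwards. A Queue makes get/put safe across threads.
try:
    import Queue as queue  # Python 2, as in mod_python's era
except ImportError:
    import queue           # Python 3

class SimplePool:
    def __init__(self, factory, size):
        # factory() creates one connection; size bounds concurrency.
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(factory())

    def get(self):
        # Blocks until a connection is free.
        return self._idle.get()

    def put(self, conn):
        # Hand the connection back for the next request.
        self._idle.put(conn)
```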

>>> If you find errors appearing in your apache log under heavy load, you
>>> may need to alter CacheBigdic.py:
>>>
>>> try:
>>>     bigdic = looong_and_painful_process()
>>> except TheErrorYouSee:
>>>     bigdic = looong_and_painful_process()
>>>
>>> It looks redundant, but it's needed because the module is cached.
> 
> And if it fails the second time as well? 

The error is explicit, so I'd say go back to the drawing board and fix 
the real problem. Sometimes catching the exception once is enough 
('MySQL server has gone away' errors, for example). As long as it's not 
a crutch, catching exceptions has its place.

> At some point you would have
> to give up, and the only way to reinitiate the action is to reload the
> module, with no means of returning a customised error response as to
> what went wrong. Thus doing it at the time of the request is again
> preferable to doing it at the time of import.

It really all depends on what looong_and_painful_process() does, what 
exceptions are raised, what type of data sharing is required, 
dependencies on other daemons or processes, how much memory, etc... If 
an object is too fragile to initialize at time of import, that doesn't 
necessarily mean it will be robustly created at the time of the request, 
either.
