[mod_python] Re: forks, daemons and other beasts

Graham Dumpleton grahamd at dscpl.com.au
Sat Feb 11 20:08:20 EST 2006

On 12/02/2006, at 11:41 AM, Daniel Nogradi wrote:

>>>>> Have you simply considered using a separate thread to do the
>>>>> unpacking?
>>>> Hmmmmm, what do you mean exactly? When I do an os.fork the whole
>>>> apache forks, you mean that I should start a brand new thread,  
>>>> outside
>>>> of apache?
>>> Using threads you wouldn't need to do an os.fork(). Simply create a
>>> new thread
>>> using the Python "threading" module inside the Apache process to  
>>> do the
>>> unpacking. You probably just need to detach the thread so it  
>>> finishes
>>> up properly
>>> and doesn't need another thread to wait on it to finish.
>>> Sorry, can't give an example right now as trying to rebuild OS on  
>>> one
>>> of my
>>> machines.
> The documentation for thread says that
> When the main thread exits, it is system defined whether the other
> threads survive. On SGI IRIX using the native thread implementation,
> they survive. On most other systems, they are killed without executing
> try ... finally clauses or executing object destructors.
> I tested this and indeed whatever is called with
> thread.start_new_thread it will die if execution reaches the end of
> the original program.

I would suggest not using the "thread" module; use the higher level
"threading" module instead. The detaching I spoke of is to ensure you
call setDaemon() on the thread object before you call start() on the
thread.
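A minimal sketch of what I mean, assuming the handler has already saved
the uploaded zip file somewhere; the function and path names here are
just illustrative.

```python
import threading
import zipfile

def unpack_zip(src, dest):
    # The long-running work, done off the request thread.
    with zipfile.ZipFile(src) as zf:
        zf.extractall(dest)

def start_background_unpack(src, dest):
    t = threading.Thread(target=unpack_zip, args=(src, dest))
    # Mark the thread as a daemon BEFORE calling start(), so nothing
    # has to join() it and it won't hold up interpreter shutdown.
    t.setDaemon(True)
    t.start()
    return t
```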

What happens on Apache shutdown will be an issue. Not because of what
you quote above, but because Apache uses signals to kill off its child
processes, sending SIGKILL if a process doesn't shut down promptly
after a series of SIGTERM signals.

Because the main thread of the process isn't run by the Python
interpreter, what you quote doesn't even come into play: the main
Python thread is never the one exiting, so that situation is never
triggered.

What will happen is that if a zip file is half unpacked and Apache
decides it has waited too long, it will simply kill the process with
SIGKILL. Your code thus has to be able to cope with the possibility
that unpacking of the zip file may not be finished in the extreme
case. This could be handled by unpacking into a temporary directory
and only moving it to the final directory in one atomic move at the
end. At least then you don't end up with partial results.
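Something along these lines, assuming the temporary and final
directories live on the same filesystem so that os.rename() is a single
atomic operation:

```python
import os
import tempfile
import zipfile

def unpack_atomically(src, final_dir):
    # Unpack into a scratch directory beside the final location...
    tmp_dir = tempfile.mkdtemp(dir=os.path.dirname(final_dir))
    with zipfile.ZipFile(src) as zf:
        zf.extractall(tmp_dir)
    # ...then publish the result with one atomic rename. If the process
    # is killed with SIGKILL mid-unpack, final_dir never appears in a
    # half-finished state; only the scratch directory is left behind.
    os.rename(tmp_dir, final_dir)
```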

If the unpacking was instead done in a separate process triggered by
XML-RPC, the unpacking could at least be serialised in order of
receipt. If that process crashes half way through, it could realise on
starting up that it had died and start again, consuming any pending
zip files. With Apache that would be hard to do, as you have multiple
child processes, so which one would be responsible for clearing out
the pending queue on a restart?
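A hypothetical sketch of what the separate process might do on startup:
re-scan a spool directory and consume, oldest first, whatever zip files
a previous crashed run left behind. The directory layout is invented
for illustration.

```python
import os
import zipfile

def drain_pending(spool_dir, dest_root):
    done = []
    # Oldest first, so unpacking stays serialised in order of receipt.
    names = sorted(os.listdir(spool_dir),
                   key=lambda n: os.path.getmtime(os.path.join(spool_dir, n)))
    for name in names:
        if not name.endswith(".zip"):
            continue
        path = os.path.join(spool_dir, name)
        with zipfile.ZipFile(path) as zf:
            zf.extractall(os.path.join(dest_root, name[:-4]))
        # Remove the zip only once fully unpacked, so a crash here
        # just means the job is redone on the next startup.
        os.remove(path)
        done.append(name)
    return done
```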

