[mod_python] Re: forks, daemons and other beasts

Daniel Nogradi nogradi at gmail.com
Sat Feb 11 20:26:06 EST 2006

> >>>>> Have you simply considered using a separate thread to do the
> >>>>> unpacking?
> >>>>
> >>>> Hmmmmm, what do you mean exactly? When I do an os.fork the whole
> >>>> of Apache forks. Do you mean that I should start a brand new
> >>>> thread, outside of Apache?
> >>>
> >>> Using threads you wouldn't need to do an os.fork(). Simply create
> >>> a new thread using the Python "threading" module inside the Apache
> >>> process to do the unpacking. You probably just need to detach the
> >>> thread so it finishes up properly and doesn't need another thread
> >>> to wait on it to finish.
> >>>
> >>> Sorry, I can't give an example right now as I'm trying to rebuild
> >>> the OS on one of my machines.
> >
> > The documentation for thread says that
> >
> > When the main thread exits, it is system defined whether the other
> > threads survive. On SGI IRIX using the native thread implementation,
> > they survive. On most other systems, they are killed without executing
> > try ... finally clauses or executing object destructors.
> >
> > I tested this, and indeed whatever is started with
> > thread.start_new_thread will die when execution reaches the end of
> > the original program.
> I would suggest not using the "thread" module; use the higher level
> "threading" module instead. The detaching I spoke of means calling
> setDaemon() on the thread object before you call start() on it.
> What happens on Apache shutdown will be an issue. Not because of what
> you quote above, but because Apache uses signals to kill off child
> processes, sending SIGKILL if a child doesn't shut down promptly
> after a series of SIGTERM signals.
> Because the main thread isn't the Python interpreter, what you state
> above doesn't even come into play: the main Python thread will not
> exit and trigger that situation.
> What will happen is that if a zip file is half unpacked and Apache
> decides it has waited too long, it will simply kill the process with
> SIGKILL. Your code thus has to be able to cope with the possibility
> that unpacking of the zip file may not have finished in that extreme
> case. This could be handled by unpacking into a temporary directory
> and only moving to the final directory in one atomic move at the end.
> At least then you don't end up with partial results.
> If the unpacking were still done in a separate process triggered by
> XML-RPC, it could at least be serialised in order of receipt. If that
> process crashed half way through, it could realise on starting up
> that it had died and resume consuming the pending zip files. With
> Apache that would be hard to do, as you have multiple child
> processes, so it is unclear which of them would be responsible for
> clearing out the pending queue on a restart.
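The scheme described above (a detached daemon thread that unpacks into a temporary directory and publishes the result with a single atomic rename) could be sketched roughly as follows. This is only an illustration, not code from the original thread; the function name and directory layout are assumptions:

```python
import os
import shutil
import tempfile
import threading
import zipfile

def unpack_async(zip_path, final_dir):
    """Unpack zip_path in a detached (daemon) thread, moving the
    results into final_dir only once unpacking has completed, so a
    killed process never leaves partial results behind."""
    def worker():
        # Unpack into a temporary directory on the same filesystem
        # as final_dir, so the rename below is a single atomic move.
        tmp_dir = tempfile.mkdtemp(dir=os.path.dirname(final_dir))
        try:
            with zipfile.ZipFile(zip_path) as zf:
                zf.extractall(tmp_dir)
            # Publish the finished result in one atomic step.
            os.rename(tmp_dir, final_dir)
        except BaseException:
            # On any failure, discard the partial unpack.
            shutil.rmtree(tmp_dir, ignore_errors=True)
            raise

    t = threading.Thread(target=worker)
    t.daemon = True   # the "detach": equivalent to setDaemon(True)
    t.start()
    return t
```

If the process is killed with SIGKILL mid-unpack, only the temporary directory is affected; final_dir either exists complete or not at all.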

Oh, I just remembered that I had set MaxRequestsPerChild 1 for Apache
in order to avoid the famous import/reload problem, and that was the
reason the thread died after the request was served. Now that I have
put it back to a higher value, the thread doesn't die and does exactly
what I want using thread.start_new_thread.
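For reference, that directive lives in the Apache configuration; a higher (or unlimited) value lets a child process serve many requests, so a background thread started during one request can outlive it. A sketch of the relevant httpd.conf fragment (the value 1000 is just an example):

```apache
# httpd.conf: let each child serve many requests, so background
# threads started while handling one request are not killed when
# the child would otherwise exit after a single request.
# A value of 0 means unlimited.
MaxRequestsPerChild 1000
```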

By the way, what's the problem with thread as opposed to threading?
