Jorey Bump
list at joreybump.com
Wed Sep 14 23:39:14 EDT 2005
Jorey Bump wrote:
> Jim Gallacher wrote:
>> I don't know if this is the correct way, but I tried it and it works:
>>
>> import sys
>> mpglobal = sys.modules['mpglobal']
>>
>> def index(req):
>>     req.write('Hello world\n')
>>     req.write(mpglobal.foo)
>
>
> That's very interesting, because due to module caching, the results are
> the same if you simply do this:
>
> import mpglobal
>
> In the published module, when mpglobal is imported, /tmp/atest isn't
> updated and mpglobal.foo is available. This indicates that one could
> initialize a db connection or other variables with PythonImport, yet
> ensure the module still works even if sys.modules['mpglobal'] isn't
> available.
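In other words, the preloaded module could hold something heavier than a
string. Just a hypothetical sketch (MySQLdb and the connection details are
placeholders, any DB-API module would do):

# mpglobal.py (hypothetical variant holding a shared resource)
import MySQLdb  # assumption: substitute whatever DB-API module you use

# Opened once per child at import time, whether the import happens
# via PythonImport or a plain import in the published module.
db = MySQLdb.connect(host='localhost', user='test', db='test')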
OK, this allows for a simpler test:
# mpglobal.py
# import this with PythonImport
import time
foo = time.strftime('%X %x %Z')
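For this to be preloaded, the test assumes something along these lines in
httpd.conf (a sketch using the mod_python 3.x directive syntax; the
directory path and interpreter name are placeholders and must match your
setup):

# httpd.conf (server config level)
# Preload mpglobal before the first request in this interpreter.
PythonImport mpglobal my_interpreter

<Directory /var/www/somedir>
    AddHandler mod_python .py
    PythonHandler mod_python.publisher
    PythonDebug On
</Directory>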
Published module:
# globaltest.py
# access global module without reimporting, due to module caching
import mpglobal
def index():
    return mpglobal.foo
Visit:
http://host/somedir/globaltest/
This will display the time the module was originally imported. Due to
module caching, the same time will be displayed for multiple requests,
but eventually the module will be reimported as new child processes
begin, and different times will be displayed (use multiple browsers to
see the effect best). This is consistent with the documentation.
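To make the child-process effect easier to spot, the published module
could also report which child answered (a quick variation; os.getpid()
identifies the Apache child serving the request):

# globaltest.py (variation showing which child answers)
import os
import mpglobal

def index():
    # foo was set when this child first imported mpglobal
    return "%s (pid %d)" % (mpglobal.foo, os.getpid())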
As far as I can tell, PythonImport doesn't provide any more persistence
for dynamically created objects or globals than simply importing the
module directly in a published module. The documentation implies that
this directive is mainly useful for initializing time-consuming tasks
before the first request. But isn't that only true immediately after
(re)starting Apache? Don't subsequent requests cause Apache to spawn
more children, reintroducing the delay for the first request handled by
each new child process?
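One way to check would be to have the preloaded module record each
import, so you can see a fresh import (and its delay) happen in every new
child (a rough sketch; the log path is made up):

# mpglobal.py (instrumented to show per-child imports)
import os
import time

foo = time.strftime('%X %x %Z')

# Append a line whenever a child imports this module, whether via
# PythonImport or a plain import in the published module.
log = open('/tmp/mpglobal.log', 'a')
log.write('imported in pid %d at %s\n' % (os.getpid(), foo))
log.close()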