Greg Stein
gstein at lyra.org
Fri May 26 10:37:22 EST 2000
On Fri, 26 May 2000, Gregory Trubetskoy wrote:
> OK, here is the deal -
>
>     exec "import " + module_name
>     module = eval(module_name)
>
> is not the same as module = __import__(module_name), but rather:
>
>     module = __import__(module_name)
>     list = string.split(module_name, ".")
>     for n in list[1:]:
>         module = getattr(module, n)
>
> My quick tests on which is faster were inconclusive. The former is
> fewer lines of code and is (IMHO) simpler to read; the latter is
> definitely more secure since it doesn't have the exec or eval.

Mine were quite conclusive. The latter code is 4 to 6 times faster.

I've attached my timing script. Here is the output of a couple of runs:

[gstein at kurgan python]$ ./time-eval.py
16.6380729675 2.65576100349 6.2648984399
22.0597770214 5.24019098282 4.20972767858
[gstein at kurgan python]$ ./time-eval.py
16.5899840593 2.59417903423 6.39508061719
21.954955101 5.28538203239 4.15390126323

Note that each run already loops 10000 times, so the numbers above should
be quite stable.

Yes, I understand that the first import takes longer than the following
imports, and that I'm not testing the cost of that first import. This is
intentional: the underlying import machinery that runs on the first import
(the parse/load and module setup) is the same for both approaches, so it
can be treated as a constant factor and omitted.

Much of this may be moot anyhow, since the "import" logic in question
occurs at startup only. But I don't think the use of exec/eval should ever
be encouraged :-) (for both performance and security reasons)

Cheers,
-g

--
Greg Stein, http://www.lyra.org/

-------------- next part --------------
#!/usr/bin/env python

import time
import string

def do_timing(m):
    l = xrange(10000)

    # time the exec/eval approach
    t = time.time()
    for i in l:
        exec "import " + m
        mod = eval(m)
    t1 = time.time() - t

    # time the __import__/getattr approach
    t = time.time()
    for i in l:
        mod = __import__(m)
        for p in string.split(m, '.')[1:]:
            mod = getattr(mod, p)
    t2 = time.time() - t

    print t1, t2, t1/t2

do_timing('httplib')
do_timing('xml.dom.core')
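
For readers trying this today: the attached script is Python 1.5/2-era code
(exec as a statement, the print statement, string.split, xrange), and
xml.dom.core came from the PyXML package rather than the standard library.
A minimal sketch of the same __import__/getattr resolution under Python 3
semantics, with xml.dom.minidom substituted for xml.dom.core, would be:

    def resolve(module_name):
        # __import__ on a dotted name returns the *top-level* package,
        # so walk the remaining components with getattr to reach the
        # actual submodule.
        module = __import__(module_name)
        for part in module_name.split('.')[1:]:
            module = getattr(module, part)
        return module

    mod = resolve('xml.dom.minidom')
    print(mod.__name__)    # -> xml.dom.minidom

This is only a sketch for illustration; the timings above were produced with
the attached script as posted. In modern code, importlib.import_module(name)
performs this dotted-name traversal directly and returns the submodule,
which avoids both exec/eval and the manual getattr loop.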