Alexis Iglauer
aiglauer at yahoo.com
Tue Nov 14 02:45:56 EST 2000
First of all, thank you to everyone for the pointers.

> I used gd with nsapy on a windows machine in 1997

Will check it out, thanks....

> The things that are the slowest are repetitive operations that involve
> memory allocations - e.g. if you're building some large list from the
> ascii files one byte at a time and your ascii file is 90K, it will be
> slow. You will see a big improvement if you only build the list once
> and cache it inside a module somewhere.

I am using the following construct often:

    i = open(datafile)
    l = i.readlines()
    i.close()

on 200KB+ files - looks like that could be the problem :)  Is it more
efficient to loop using "for l in i.readlines():"?

My datafiles are time series data, usually for 24 hours at a time, so I
will have a whole bunch of datafiles which make up a contiguous set of
data over several days/months/years. The last datafile will probably
always be growing. A two-stage caching module may do the trick.
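A minimal sketch of what reading such a file line by line (instead of
readlines()) might look like, so no intermediate list of raw lines is ever
built; parse_line() is only a placeholder assumption for whatever the real
per-record parsing is:

    # Read a time-series file one line at a time instead of slurping it
    # with readlines(), so no throwaway list of raw lines is allocated.
    # parse_line() stands in for the real per-record parsing.
    def parse_line(line):
        # assumption: whitespace-separated "timestamp value" records
        return line.split()

    def process_file(datafile):
        records = []
        f = open(datafile)
        while 1:                 # readline() loop also works on older Pythons
            line = f.readline()
            if not line:         # empty string means end of file
                break
            records.append(parse_line(line))
        f.close()
        return records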
> If your content is dynamic and cannot be cached, then try to avoid
> things like readlines() of the whole file, then copying it to some
> list. A much faster solution is to read the file as you need the
> data, avoiding all unnecessary mallocs.

This seems to answer my question above in the positive. I will, however,
still be reading all my datafiles every time.

> Python "import" mechanism - make a module that reads all your ascii
> files, then import it from a script. Python only imports once the
> first time and ignores all subsequent requests - perfect for caching.

Does "first time" here mean the first time apache is started, or the
first time an apache process is (re)started?

> HTH

Yup, thanks

aei
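A minimal sketch of the import-time, two-stage caching idea, assuming a
hypothetical module name (datacache.py), a "data/*.dat" one-file-per-day
layout, and whitespace-separated records - none of which are specified
above:

    # datacache.py -- hypothetical two-stage caching module.  The
    # module-level code runs once per Python interpreter, on the first
    # import; subsequent imports reuse the dictionary built here.
    import glob

    _cache = {}

    def _load(datafile):
        # same line-at-a-time reading as in the sketch above
        records = []
        f = open(datafile)
        while 1:
            line = f.readline()
            if not line:
                break
            records.append(line.split())
        f.close()
        return records

    # assumed layout: one ascii file per day, names sorting
    # chronologically, with the still-growing file last
    _files = glob.glob('data/*.dat')
    _files.sort()

    # stage 1: completed files are read and cached once, at import time
    for _name in _files[:-1]:
        _cache[_name] = _load(_name)

    def get(datafile):
        """Return the records for one datafile.  The newest (growing)
        file is never cached, so it is re-read on every call."""
        if _files and datafile == _files[-1]:
            return _load(datafile)    # stage 2: always fresh
        return _cache[datafile]

A script would then just "import datacache" and call datacache.get(...).
As for "first time": with an interpreter embedded in the web server
(nsapy, PyApache and the like), module-level code generally runs once per
interpreter instance - in Apache's case usually once per child process,
and again after that process is restarted - whereas a plain CGI setup
would re-run it on every request.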