[mod_python] modpython, mysqldb best paractice

Martijn Moeling martijn at xs4us.nu
Wed Jul 19 15:44:17 EDT 2006

Deron Meranda wrote:
> On 7/19/06, Martijn Moeling <martijn at xs4us.nu> wrote:
>> If mod_python is running in interpreter-per-directory mode, one
>> interpreter is created, since all my content for mkbOK resides in /:
>> in total over 14.000 different pages. And since we have over 10.000
>> pageviews per day and aim for 100.000+ per day at the end of the
>> year, I am preparing for a second server (which my system can
>> If mod_python is running in interpreter-per-directive mode I can end
>> up with god knows how many interpreters.
> There's probably no need for you to use multiple python interpreters
> at all.  The only advantage is that it can provide you with some
> level of isolation (but not perfect); it will not provide any
> benefits (and actually is more likely to decrease performance).
> But as you're using it, you don't need more than one interpreter.  So
> avoid all the PythonInter* directives.
>> The register_cleanup part is clear now; since my system creates the
>> connection in the class module (the init call creates the class) I
>> have to alter that, though.
> Also don't forget about try:...finally:... blocks.  That's often the
> simplest way to make sure you clean up after something.
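Concretely, that cleanup pattern looks like the following sketch. The `FakeConnection` stand-in and all names here are illustrative, not part of mod_python or MySQLdb; with a real connection the same shape applies, just with `MySQLdb.connect()` supplying the object and its `close()` doing the real work.

```python
class FakeConnection:
    """Stand-in for a MySQLdb connection, so the cleanup pattern can be
    demonstrated without a live database (illustrative only)."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


def run_query(conn, fail=False):
    """Do some work with the connection; the finally: block guarantees
    close() runs whether the work succeeds or raises."""
    try:
        if fail:
            raise RuntimeError("query blew up")
        return "result row"
    finally:
        conn.close()  # executed on success and on error alike
```

The point is that `close()` runs on every exit path, so an exception in the middle of page generation can never leak a connection.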
> If the database connection is made inside your class, perhaps you
> should put a disconnect call in the class's destructor, __del__().  I
> don't know if you're using new-style classes, or traditional classes,
> but perhaps something like:
>    class db_based_service(object):
>        def __init__(self):
>            self.db = None
>        def __del__(self):
>            self.disconnect()
>        def connect(self):
>            self.db = MySQLdb.connect( ..... )
>        def disconnect(self):
>            if self.db is not None:
>                self.db.close()   # MySQLdb connections are closed with close()
>                self.db = None
>        def init(self):
>            self.connect()
> Furthermore, if you're using transactions, you should make sure
> that you don't have any lingering open transactions.  If you're
> connecting and disconnecting on every request you probably don't
> need to worry quite so much.  But if you ever re-use or pool your
> database connections in the future, you may want to consider
> insuring that all your transactions get terminated at the end of
> the request.  Perhaps extending the framework to something like
>    class ..... (same as above)
>        def __init__(self):
>            # same other stuff above
>            self.in_trans = False
>        def __del__(self):
>             self.rollback()
>             self.disconnect()
>        def start_transaction(self):
>            if self.in_trans:
>                raise RuntimeError("Attempted nested transaction")
>            self.db.begin()
>            self.in_trans = True
>        def commit(self):
>            if self.in_trans:
>                self.db.commit()
>                self.in_trans = False
>        def rollback(self):
>            if self.in_trans:
>                self.db.rollback()
>                self.in_trans = False
> Of course you may want to see if too-many-connections or
> non-terminated transactions are even a problem.  Periodically
> run the mysql "show processlist" command.  Maybe even an
> occasional "show status" may be informative.
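To make the per-request transaction discipline concrete, here is a self-contained sketch. `FakeDB` is a stand-in that records calls instead of talking to MySQL, and `Service` is a minimal version of the wrapper quoted above; only the shape of `handle_request` is the point.

```python
class FakeDB:
    """Records begin/commit/rollback calls instead of talking to MySQL,
    so the pattern can be exercised standalone (illustrative only)."""
    def __init__(self):
        self.calls = []

    def begin(self):
        self.calls.append("begin")

    def commit(self):
        self.calls.append("commit")

    def rollback(self):
        self.calls.append("rollback")


class Service:
    """Minimal version of the transaction wrapper quoted above."""
    def __init__(self, db):
        self.db = db
        self.in_trans = False

    def start_transaction(self):
        if self.in_trans:
            raise RuntimeError("Attempted nested transaction")
        self.db.begin()
        self.in_trans = True

    def commit(self):
        if self.in_trans:
            self.db.commit()
            self.in_trans = False

    def rollback(self):
        if self.in_trans:
            self.db.rollback()
            self.in_trans = False


def handle_request(svc, fail=False):
    """Per-request discipline: commit on success, roll back on error,
    so no transaction ever outlives the request."""
    svc.start_transaction()
    try:
        if fail:
            raise ValueError("page generation failed")
        svc.commit()
    except Exception:
        svc.rollback()
        raise
```

With pooled or reused connections this guarantee matters: a transaction left open by one request would otherwise hold locks into the next request served on the same connection.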
>> The system goes from normal cpu utilization to 100% within a few
>> microseconds, and it happens now and then: sometimes shortly after
>> a reboot, sometimes it runs for weeks without trouble.
> Once it gets in that state will it ever eventually clear up?
> Is your system going into an I/O paging fit?  Run the command
> "vmstat 5" and watch the "so" and "bo" columns for a minute.
> "so" should stay near 0, and "bo" should have faily low numbers
> (say <30), but really you should compare it against when the
> system is running okay.
> Also run "top" and determine exactly which process(es) are
> charged with using the most cpu.
>> I tried multiple cron thingies to investigate, but even cron slows
>> down so much that a "service httpd restart" and/or a mysql restart
>> take hours to complete.
> Certainly sounds like heavy paging or swapping.
>> in fact (but keep in mind I have had no interactive access) I think
>> mysql stops responding at all, even to signals. I even tried "nice"
>> in the hope that mysql could not take 100%, but that was not the
>> case and it only slowed down the page building process (not
>> surprised haha). Even installing a second CPU did not help.
>> The even more stupid thing is that this behavior does not happen on
>> a PIII 1 GHz with an exact copy of the HDD (dd if=/dev/hd1
>> of=/dev/hd2). Since the cpu in our production machine is 64 bit I
>> suspected that, and built apache, mod_python and python all from
>> scratch... no luck.
> What about the amount of memory?  That can have an even bigger
> impact than the speed of the CPU.
>> Different mysql versions did not matter either.
>> The oddest thing is that after an update of my python code (a new
>> release of my system) it takes 1 or 2 days before it happens, then
>> it takes say 4 or 5 days, next it runs ok for weeks.
> Perhaps you've got some suboptimal SQL.  For instance are you
> doing a lot of sorts, or very large joins?
> Also what MySQL storage engine are you using?  InnoDB, or
> MyISAM, etc?

Also, I'm not sure if you told us anything about your OS, apache version
and mpm (prefork or worker) or mod_python version.

The fact is that mod_python has memory leaks. We are tracking them down -
3.2.8 is better than 3.1.4, and the next stable release will be better
yet. (Fixed a leak in util.parse_qsl, used in FieldStorage.) Having
mod_python leak memory could impact mysql, causing heavy swapping when
trying to build a query result. Just a WAG and another thing to check.



Thanx Jim, I did not know about the memory leaks. But you have a point
on the mysql behaviour: one of the implications of my system is lots of
queries, so the memory hunger of mysql seems justified.

I think I have mod_python 3.2.8; I will have a look.

Considering the memory leaks: do you think altering the minserver,
maxserver, spare-server and similar parameters when configuring apache
would help? If there are memory leaks, killing the app in some
automatic way at least does not clog up log files etc. I'd rather have
the overhead of process creation than a down server while I am
windsurfing.
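For what it's worth, the usual mitigation for a leaky child process under the prefork MPM is exactly that: let Apache recycle children automatically. A sketch using stock Apache 2.x prefork directives (the numbers are illustrative placeholders, not tuned recommendations):

```apache
# prefork MPM: recycle every child after it has served 1000 requests,
# so memory leaked by mod_python is handed back to the OS.
# All numbers below are illustrative, not tuned values.
StartServers          5
MinSpareServers       5
MaxSpareServers      10
MaxClients          150
MaxRequestsPerChild 1000
```

The trade-off is exactly the process-creation overhead mentioned above: each recycled child costs a fork and a fresh interpreter, in exchange for bounding how much a leak can grow.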

Well, this is the first time I am really using open source for a
development project. Although I have been heavily involved in the first
real Linux kernel, I saw possible trouble like this lurking around the
corner.

What a laugh we had back in those days. I remember driving to my
friend's with my development system attached to a small UPS (which
happened to be the same size as my computer casing), keeping it running
in the middle of a debug session, since I had finally reproduced a
serious memory leak in the part of the TCP/IP stack we were developing.
I guess you have a system around running my code, doing a lot of hard
work sending and receiving network information, and are very happy that
leak finally got plugged with a decent piece of code. I know it is a
frustrating process; I have been there.

I set out to fix this issue but found a pile of possibilities. Thanx.

I did not name the versions I run for a few reasons

The most important one: I know updates solve problems, but people
seem to forget that they almost always introduce a few new ones.

I compute the following code in my head:

for word in release_notes:
	if word in ["better", "increased", "quicker", "you get the point"]:
		urge_to_upgrade += 1

I first really try to investigate the problem rather than try to solve
what I do not understand. Simply sending version numbers results in
"try that new version..." replies. I can go deeper if I feel I need
to, to help the community solve things. (This is not a promise; I am
really busy at the moment.)

I have another reason which is nice to know: I have some experience
with these noisy boxes not doing what I am asking. With that come
feelings, and my feelings are telling me that I am wrong somewhere.

Anyway, I'll report stuff

