[mod_python] Re: mod_python 3.2.3b available for testing (util.py and file uploads)

Mike Looijmans nlv11281 at natlab.research.philips.com
Tue Nov 1 02:33:38 EST 2005

Nick wrote:
> Mike Looijmans wrote:
>> Nick wrote:
>>> Mike Looijmans wrote:
>>>> Nick wrote:
>>>>> So that's my problem, or at least that's where the conversation has 
>>>>> led me.  Is there an easy way to figure out what you've got other 
>>>>> than process of elimination?
>>>> Why not use:
>>>> if hasattr(file, 'filename'):
>>>>     ...
>>>> The FieldStorage only adds the filename attribute to the file object 
>>>> if the 'filename' header was present in the corresponding POST 
>>>> chunk. This is also the trigger used internally to determine whether 
>>>> it's a file.
>>> That will always evaluate to True.  filename is set to the file name 
>>> provided in the content-disposition IF the browser set one, which is 
>>> not required by the protocol.  Otherwise it gets set to None.  A 
>>> filename of None does not necessarily mean that it's not a file, just 
>>> that none was given in the content-disposition.
>> You're totally right - I forgot how enthusiastically I hacked the 
>> util.py file, and removed the filename attr.
>> A check on "filename is not None" should be OK. If the browser did not 
>> send a filename, the tempfile routine will also not be triggered, so 
>> that the test we currently use ("typeof(Filetype)" and derivatives) 
>> also fails to recognize it as a file.
>> What strikes me as weird is that the module parses the request, draws 
>> the right conclusions, but somewhere along the way forgets about it 
>> and then has to go back to figure things out.
>> I think it would be more logical (from an OO perspective), to make the 
>> StringField resemble Field in ALL aspects (add the 'name', 'file' and 
>> other attributes to it), and add it to the internal item list of 
>> FieldStorage. The __getitem__ method(s) can then simply return the 
>> item, and don't need to create the StringField object.
>> I'll hack some more, see how it turns out.
> I agree with you on that, although it is possible to get a file upload 
> without a filename -- that's not against spec.  So if the code does 
> indeed ignore the content if no filename is set, that would be wrong.
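
To make the check we're discussing concrete, here's a rough sketch. The
Item class is just a stand-in I made up for a parsed multipart field, not
mod_python's actual object:

```python
class Item:
    """Stand-in for a parsed multipart field (hypothetical, for illustration)."""
    def __init__(self, name, filename=None):
        self.name = name
        # filename stays None unless the browser sent one in the
        # Content-Disposition header (which the spec does not require).
        self.filename = filename

def is_file_upload(item):
    # The test discussed above: treat a field as a file upload only
    # when a filename was actually supplied.
    return item.filename is not None

print(is_file_upload(Item("comment")))           # False
print(is_file_upload(Item("photo", "cat.jpg")))  # True
```

As noted above, this misclassifies a file upload that arrives without a
filename, so it is an approximation rather than a watertight test.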

I've been ill for a few days, so it took a while to get back.

I've attached a util.py for mod_python that allows me to upload files
many times larger than the system memory: Apache consumed only a few
MB of RAM while posting a few GB of file uploads.

As far as I can see, this does not break compatibility with existing
scripts.
The code is simpler, and probably faster too (especially if a
StringField is referred to multiple times in a script).

Calling req.readline() without a limit seems to cause Apache (2.0.55 on
Windows) to read the whole POST request into system RAM. This even
happens during header parsing, so I supplied a 10 kB limit there as well
(a Content-Type header 10 kB in size sounds more like an attack than a
sane request to me).
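
The defensive pattern boils down to something like this sketch, using an
in-memory stream in place of req and the same 10240-byte cap
(`bounded_readline` is a name I made up for illustration):

```python
import io

MAX_LINE = 10240  # the 10 kB cap mentioned above

def bounded_readline(stream, limit=MAX_LINE):
    # readline(limit) returns at most `limit` bytes, so a huge header
    # line can never pull the whole request body into RAM at once.
    line = stream.readline(limit)
    if len(line) == limit and not line.endswith(b"\n"):
        # Hit the cap without finding end-of-line: reject the request.
        raise ValueError("header line longer than %d bytes" % limit)
    return line

stream = io.BytesIO(b"Content-Type: text/plain\r\n" + b"A" * 20000)
print(bounded_readline(stream))  # a sane header line passes through
```

A second call on the same stream then hits the 20000 bytes of "A" and
raises, instead of buffering them all.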

I'm looking for some volunteers to see if this util.py breaks their scripts.

Mike Looijmans
Philips Natlab / Topic Automation
-------------- next part --------------
 # Copyright 2004 Apache Software Foundation 
 # Licensed under the Apache License, Version 2.0 (the "License"); you
 # may not use this file except in compliance with the License.  You
 # may obtain a copy of the License at
 #      http://www.apache.org/licenses/LICENSE-2.0
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
 # implied.  See the License for the specific language governing
 # permissions and limitations under the License.
 # Originally developed by Gregory Trubetskoy.
 # $Id: util.py 102649 2004-02-16 19:47:28Z grisha $

import _apache
from mod_python import apache
import cStringIO
import tempfile

from types import *
from exceptions import *

parse_qs = _apache.parse_qs
parse_qsl = _apache.parse_qsl

""" The classes below are (almost) a drop-in replacement for the
    standard cgi.py FieldStorage class. They should have pretty much the
    same functionality.

    These classes differ in that unlike cgi.FieldStorage, they are not
    recursive. The class FieldStorage contains a list of instances of
    Field class. Field class is incapable of storing anything in it.

    These objects should be considerably faster than the ones in cgi.py
    because they do not expect CGI environment, and are
    optimized specifically for Apache and mod_python.
"""

class Field:

    filename = None
    headers = {}

    def __init__(self, name):
        self.name = name

    def __repr__(self):
        """Return printable representation."""
        return "Field(%s, %s)" % (`self.name`, `self.value`)

    def __getattr__(self, name):
        if name != 'value':
            raise AttributeError, name
        if self.file:
            self.value = self.file.read()
        else:
            self.value = None
        return self.value

    def __del__(self):
        # Close the underlying (temporary) file, if one was attached.
        file = getattr(self, 'file', None)
        if file is not None:
            file.close()

class StringField(str):
    """ This class is basically a string with added attributes for
    compatibility with the standard library's cgi.py. It works the opposite
    way from Field: it stores its data in a string, but creates a file on
    demand, whereas Field creates a value on demand and stores data in a file.
    """
    filename = None
    headers = {}
    ctype = "text/plain"
    type_options = {}
    disposition = None
    disp_options = None
    # I wanted __init__(name, value) but that does not work (apparently, you
    # cannot subclass str with a constructor that takes >1 argument)
    def __init__(self, value):
        '''Create StringField instance. You'll have to set name yourself.'''
        str.__init__(self, value)
        self.value = value

    def __getattr__(self, name):
        if name != 'file':
            raise AttributeError, name
        self.file = cStringIO.StringIO(self.value)
        return self.file

class FieldStorage:
    def __init__(self, req, keep_blank_values=0, strict_parsing=0):

        self.list = []

        # always process GET-style parameters
        if req.args:
            pairs = parse_qsl(req.args, keep_blank_values)
            for pair in pairs:
                self.add_field(pair[0], pair[1])

        if req.method == "POST":

            try:
                clen = int(req.headers_in["content-length"])
            except (KeyError, ValueError):
                # absent content-length is not acceptable
                raise apache.SERVER_RETURN, apache.HTTP_LENGTH_REQUIRED

            if not req.headers_in.has_key("content-type"):
                ctype = "application/x-www-form-urlencoded"
            else:
                ctype = req.headers_in["content-type"]

            if ctype == "application/x-www-form-urlencoded":
                pairs = parse_qsl(req.read(clen), keep_blank_values)
                for pair in pairs:
                    self.add_field(pair[0], pair[1])

            elif ctype[:10] == "multipart/":

                # figure out boundary
                try:
                    i = ctype.lower().rindex("boundary=")
                    boundary = ctype[i+9:]
                    if len(boundary) >= 2 and boundary[0] == boundary[-1] == '"':
                        boundary = boundary[1:-1]
                    boundary = "--" + boundary
                except ValueError:
                    raise apache.SERVER_RETURN, apache.HTTP_BAD_REQUEST

                # read until boundary
                # ML: req.readline without any limit seems to let my
                # apache 2.0.55 consume the whole request at once, and
                # may fail with a memory error
                line = req.readline(10240)
                while line and not line.startswith(boundary):
                    line = req.readline(10240)

                while 1:

                    ## parse headers
                    ctype, type_options = "text/plain", {}
                    disp, disp_options = None, {}
                    headers = apache.make_table()

                    line = req.readline(10240)
                    if len(line) == 10240:
                        # Header line too long - most likely a malformed
                        # or malicious multipart post.
                        raise apache.SERVER_RETURN, apache.HTTP_BAD_REQUEST
                    sline = line.strip()
                    if not line or sline == (boundary + "--"):
                        break

                    while line and line not in ["\n", "\r\n"]:
                        h, v = line.split(":", 1)
                        headers.add(h, v)
                        h = h.lower()
                        if h == "content-disposition":
                            disp, disp_options = parse_header(v)
                        elif h == "content-type":
                            ctype, type_options = parse_header(v)
                        line = req.readline(10240)
                        if len(line) == 10240:
                            # Header line too long - most likely a malformed
                            # or malicious multipart post.
                            raise apache.SERVER_RETURN, apache.HTTP_BAD_REQUEST
                        sline = line.strip()

                    if disp_options.has_key("name"):
                        name = disp_options["name"]
                    else:
                        name = None

                    # create a file object
                    file = self.make_file(disp_options)

                    # read it in
                    self.read_to_boundary(req, boundary, file)
                    file.seek(0)

                    # make a Field
                    if disp_options.has_key("filename"):
                        field = Field(name)
                        field.filename = disp_options["filename"]
                    else:
                        file.seek(0)
                        field = StringField(file.read())
                        field.name = name
                    field.file = file
                    field.type = ctype
                    field.type_options = type_options
                    field.disposition = disp
                    field.disposition_options = disp_options
                    field.headers = headers
                    self.list.append(field)

            else:
                # we don't understand this content-type
                raise apache.SERVER_RETURN, apache.HTTP_NOT_IMPLEMENTED

    def add_field(self, key, value):
        """Insert a field as key/value pair"""
        item = StringField(value)
        item.name = key
        self.list.append(item)

    def make_file(self, disp_options):
        """Create a file object for the given disp_options. You can override
        this method to avoid temp file creation and stream directly. The
        returned file must at least support write(data) and seek(0)."""
        if disp_options.has_key("filename"):
            return tempfile.TemporaryFile("w+b")
        else:
            return cStringIO.StringIO()

    def skip_to_boundary(self, req, boundary):
        line = req.readline(10240)
        while line and not line.startswith(boundary):
            line = req.readline(10240)

    def read_to_boundary(self, req, boundary, file):
        delim = ""
        line = req.readline(10240)
        while line and not line.startswith(boundary):
            odelim = delim
            if line[-2:] == "\r\n":
                delim = "\r\n"
                line = line[:-2]
            elif line[-1:] == "\n":
                delim = "\n"
                line = line[:-1]
            else:
                delim = ""
            file.write(odelim + line)
            line = req.readline(10240)

    def __getitem__(self, key):
        """Dictionary style indexing."""
        if self.list is None:
            raise TypeError, "not indexable"
        found = []
        for item in self.list:
            if item.name == key:
                found.append(item)
        if not found:
            raise KeyError, key
        if len(found) == 1:
            return found[0]
        else:
            return found

    def get(self, key, default):
        """Dictionary style get() method, with a default value."""
        try:
            return self.__getitem__(key)
        except KeyError:
            return default

    def keys(self):
        """Dictionary style keys() method."""
        if self.list is None:
            raise TypeError, "not indexable"
        keys = []
        for item in self.list:
            if item.name not in keys: keys.append(item.name)
        return keys

    def has_key(self, key):
        """Dictionary style has_key() method."""
        if self.list is None:
            raise TypeError, "not indexable"
        for item in self.list:
            if item.name == key: return 1
        return 0

    __contains__ = has_key

    def __len__(self):
        """Dictionary style len(x) support."""
        return len(self.keys())

    def getfirst(self, key, default=None):
        """ return the first value received """
        for item in self.list:
            if item.name == key:
                return item
        return default

    def getlist(self, key):
        """ return a list of received values """
        if self.list is None:
            raise TypeError, "not indexable"
        found = []
        for item in self.list:
            if item.name == key:
                found.append(item)
        return found

def parse_header(line):
    """Parse a Content-type like header.

    Return the main content-type and a dictionary of options.
    """
    plist = map(lambda a: a.strip(), line.split(';'))
    key = plist[0].lower()
    del plist[0]
    pdict = {}
    for p in plist:
        i = p.find('=')
        if i >= 0:
            name = p[:i].strip().lower()
            value = p[i+1:].strip()
            if len(value) >= 2 and value[0] == value[-1] == '"':
                value = value[1:-1]
            pdict[name] = value
    return key, pdict

def apply_fs_data(object, fs, **args):
    """
    Apply FieldStorage data to an object - the object must be
    callable. Examine the args, and match them with fs data,
    then call the object, return the result.
    """

    # add form data to args
    for field in fs.list:
        if field.filename:
            val = field
        else:
            val = field.value
        args.setdefault(field.name, []).append(val)

    # replace lists with single values
    for arg in args:
        if ((type(args[arg]) is ListType) and
            (len(args[arg]) == 1)):
            args[arg] = args[arg][0]

    # we need to weed out unexpected keyword arguments
    # and for that we need to get a list of them. There
    # are a few options for callable objects here:

    if type(object) is InstanceType:
        # instances are callable when they have __call__()
        object = object.__call__

    expected = []
    if hasattr(object, "func_code"):
        # function
        fc = object.func_code
        expected = fc.co_varnames[0:fc.co_argcount]
    elif hasattr(object, 'im_func'):
        # method
        fc = object.im_func.func_code
        expected = fc.co_varnames[1:fc.co_argcount]
    elif type(object) is ClassType:
        # class
        fc = object.__init__.im_func.func_code
        expected = fc.co_varnames[1:fc.co_argcount]

    # remove unexpected args unless co_flags & 0x08,
    # meaning function accepts **kw syntax
    if not (fc.co_flags & 0x08):
        for name in args.keys():
            if name not in expected:
                del args[name]

    return object(**args)

def redirect(req, location, permanent=0, text=None):
    """
    A convenience function to provide redirection.
    """

    if req.sent_bodyct:
        raise IOError, "Cannot redirect after headers have already been sent."

    req.err_headers_out["Location"] = location
    if permanent:
        req.status = apache.HTTP_MOVED_PERMANENTLY
    else:
        req.status = apache.HTTP_MOVED_TEMPORARILY

    if text is None:
        req.write('<p>The document has moved'
                  ' <a href="%s">here</a></p>\n'
                  % location)
    else:
        req.write(text)

    raise apache.SERVER_RETURN, apache.OK
