Marc Tardif wrote:
> > > I'm writing a search engine using Python and PostgreSQL which requires
> > > storing a temporary list of results in an SQL table for each request.
> > > This list will contain at least 50 records and could grow to about 300.
> > > My options are either to pickle the list and store a single entry, or to
> > > use the PostgreSQL COPY command (as opposed to INSERT, which would be
> > > too slow) to store each of the temporary records.
> > >
> >
> > > You are writing a search engine : does that mean that you need to search
> > > the
> > > web and that you want to store your temporary results in a table, OR
> > > does that mean that you are writing a QUERY screen, from which you
> > > generate a SELECT statement to query your POSTGRES database ?
> > >
> > > Also what size are your tuples ?
> > >
> > > Do you need these temporary results within the same program, or do you
> > > need to pass them somewhere to another program ?
>
> The former: search the web and store temporary results in a table. As for
> the tuples, I can expect each to be <100 bytes. Finally, the temporary
> results will only be used by the same program.
>
If your temporary results ARE really to be used by the same program, then
I suggest that you use a solution whereby you keep your temporary results
in a data structure in memory, and not write them to any table or
temporary file. Python has enough basic and extended data structures for
that.
If your tuple size is 100 bytes and you are sure that you have a maximum
of 300 tuples, then you will use approximately 30 KB of memory (not
counting run-time overhead). Using a simple list to store your data
will simplify your life greatly, and you won't need to worry about memory
management.
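For example, a minimal sketch of the in-memory approach. The field names
(url, score) and the helper functions here are hypothetical, just to show
the idea of holding per-request results in a plain list:

```python
# Keep per-request search results in a plain Python list instead of a
# temporary table. At ~100 bytes per tuple and <= ~300 tuples, the whole
# list fits comfortably in memory.

results = []  # one list per request; discarded when the request is done

def add_result(url, score):
    """Append one temporary result tuple."""
    results.append((url, score))

def top_results(n=50):
    """Return the n best results, sorted by descending score."""
    return sorted(results, key=lambda r: r[1], reverse=True)[:n]

add_result("http://example.com/a", 0.9)
add_result("http://example.com/b", 0.4)
print(top_results(2))
```

Sorting 300 tuples is effectively instantaneous, so there is no need for
COPY, INSERT, or pickling at this scale.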
Good luck.
Jurgen Defurne
defurnj@glo.be