Re: Berkeley DB... - Mailing list pgsql-hackers

From Karel Zak
Subject Re: Berkeley DB...
Date
Msg-id Pine.LNX.3.96.1000529150613.7470A-100000@ara.zf.jcu.cz
In response to Re: Berkeley DB...  (Mike Mascari <mascarm@mascari.com>)
List pgsql-hackers
> It will be interesting to see the speed differences between the
> 100,000 inserts above and those which have been PREPARE'd using
> Karel Zak's PREPARE patch. Perhaps a generic query cache could be

My test:
postmaster:      -F -B 2000
rows:            100,000
table:           create table tab (data text);
data:            37B for each line
--- all inserts in one transaction
 
native insert:      66.522s
prepared insert:    59.431s    - 11% faster
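For clarity, a minimal sketch of the two variants (the PREPARE/EXECUTE
syntax shown here is illustrative; the exact form in the patch may differ):

    -- native: parsed and planned on every call (100,000 times)
    INSERT INTO tab VALUES ('some data');

    -- prepared: parsed and planned once, then only executed
    PREPARE ins (text) AS INSERT INTO tab VALUES ($1);
    EXECUTE ins ('some data');    -- repeated 100,000 times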

IMHO, parsing/optimizing is relatively easy for a simple INSERT.
The query (plan) cache will probably save time for complicated SELECTs
with functions etc. (i.e. queries that need to look at system tables
during parsing). For example:
insert into tab values ('some data' || 'somedata' || 'some data');
native insert:      91.787s
prepared insert:    45.077s    - 50% faster
(Note: this second test run was faster overall because I stopped the
X server, so postgres had more memory :-)
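The prepared version presumably wins so much here because the '||'
expressions sit inside the statement body and are parsed only once,
e.g. (again, illustrative syntax):

    PREPARE ins2 AS INSERT INTO tab
        VALUES ('some data' || 'somedata' || 'some data');
    EXECUTE ins2;    -- the concatenation is not re-parsed/re-planned per call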

The best way to insert large amounts of simple data is (and always will
be) COPY; no faster way exists.
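E.g. (the file path is just a placeholder):

    COPY tab FROM '/tmp/data.txt';    -- server-side file, one line per row
    -- or from a psql client:
    \copy tab from data.txt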
pg's path(s) of a query:

native insert:      parser -> planner -> executor -> storage
prepared insert:    parser (for EXECUTE stmt) -> executor -> storage
copy:               utils (copy) -> storage
 

> amongst other things). I'm looking forward to when the 7.1 branch
> occurs... :-)
Me too.
                        Karel


