Tom Lane wrote:
>
> Edmund Mergl <E.Mergl@bawue.de> writes:
> > When loading 100,000 rows into the table
> > everything works OK. Selects and updates
> > are reasonably fast. But when loading
> > 1,000,000 rows the select statements still
> > work, but a simple update statement
> > shows this strange behavior.
>
> Can you provide a script or something to reproduce this behavior?
>
> There are a number of people using Postgres with large databases
> and not reporting any such problem, so I think there has to be some
> special triggering condition; it's not just a matter of things
> breaking at a million rows. Before digging into it, I'd like to
> eliminate variables like whether I have the right test case.
>
> regards, tom lane
The original benchmark can be found at
ftp://ftp.heise.de/pub/ix/benches/sqlb-21.tar
For a stripped-down version, see the attachment.
To load the database and run the first
and second parts (selects and updates), just do
the following:
createdb test
./make_wnt 1000000 pgsql >make.out 2>&1 &
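
If you want a quick sanity check before committing
to the full run, a smaller row count should work as
well (this assumes make_wnt simply scales with its
first argument; the 10000 here is only an
illustrative value):

# hypothetical smoke test at 1% of the full scale
createdb test_small
./make_wnt 10000 pgsql >make_small.out 2>&1 &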
The full run needs about 700 MB of disk space.
On a PII-400 it takes about 40 minutes to
load the database, 20 minutes to create the indexes,
and 20 minutes to run the first part of the
benchmark (make_sqs). Running the benchmark
in 20 minutes (without swapping) requires 384 MB of RAM.
The second part (make_nqs) contains the update
statements that PostgreSQL cannot execute properly.
For testing, it is sufficient to initialize the
database and then run a query like:

update bench set k500k = k500k + 1 where k100 = 30;
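
To see whether the planner picks an index scan or
falls back to a sequential scan over all 1,000,000
rows, the same statement can be prefixed with
EXPLAIN (just a sketch of the check, not part of
the benchmark itself; the index on k100 is whatever
make_wnt created):

-- does the planner use an index on k100, or scan every row?
explain
update bench set k500k = k500k + 1 where k100 = 30;

-- each update leaves dead tuples behind; vacuum analyze reclaims
-- them and refreshes the planner statistics before the next run
vacuum analyze bench;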
Edmund
--
Edmund Mergl mailto:E.Mergl@bawue.de
Im Haldenhau 9 http://www.bawue.de/~mergl
70565 Stuttgart          phone: +49 711 747503
Germany