Tom Lane wrote:
>
> Edmund Mergl <E.Mergl@bawue.de> writes:
> > The table is filled with 1,000,000 rows of random data
> > and on every field an index is created.
>
> BTW, do you happen to know just how random the data actually is?
> I noticed that the update query
> update bench set k500k = k500k + 1 where k100 = 30;
> updates 10,000 rows. If this "random" data actually consists of
> 10,000 repetitions of only 100 distinct values in every column,
> then a possible explanation for the problem would be that our
> btree index code isn't very fast when there are large numbers of
> identical keys. (Mind you, I have no idea if that's true or not,
> I'm just trying to think of likely trouble spots. Anyone know
> btree well enough to say whether that is likely to be a problem?)
>
> regards, tom lane
the query:
update bench set k500k = k500k + 1 where k100 = 30;
affects about 10,000 rows. This can be determined by running
the query:
select k500k from bench where k100 = 30;
which takes about half a minute. That is the reason I was
talking about the strange UPDATE behavior of PostgreSQL:
if it can determine the matching rows in a reasonable
amount of time, it should be able to update those rows in a
comparable time frame.
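The comparison above can be reproduced in psql roughly as follows (a sketch, assuming the bench table from the benchmark schema exists; \timing is a psql meta-command that reports elapsed time per statement):

```sql
-- Enable per-statement timing in psql
\timing on

-- Step 1: locate the matching rows (takes ~30 s in the benchmark)
select k500k from bench where k100 = 30;

-- Step 2: update the same rows; the question is why this
-- takes far longer than the SELECT that found them
update bench set k500k = k500k + 1 where k100 = 30;
```

Comparing the two reported times shows how much of the UPDATE cost comes from the write path (new row versions plus index maintenance on every indexed column) rather than from finding the rows.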
Edmund
--
Edmund Mergl mailto:E.Mergl@bawue.de
Im Haldenhau 9 http://www.bawue.de/~mergl
70565 Stuttgart fon: +49 711 747503
Germany