Tom Lane wrote:
>
> Edmund Mergl <E.Mergl@bawue.de> writes:
> > ... That's the reason I
> > was talking about the strange UPDATE behavior of
> > PostgreSQL. If it can determine a specific number
> > of rows in a reasonable time, it should be able to
> > update these rows in the same time frame.
>
> Not necessarily --- this table has a remarkably large number of indexes,
> and all of them have to be updated when a tuple is replaced. So the
> amount of work is significantly greater than simply finding the tuples
> will require.
>
> As I posted later, I think that much of the problem comes from poor
> handling of equal-key cases in our btree index routines...
>
> regards, tom lane
If this is the case, these routines must be very poor.
Again some numbers:
1.000.000 rows:
- select * from bench where k100 = 30
      with indexes:     10 seconds
      without indexes:  28 seconds
- update bench set k500k = k500k + 1 where k100 = 30
      with indexes:     unknown
      without indexes:  36 seconds
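For reference, here is a minimal sketch of what is being timed. Only the
names bench, k100, and k500k are taken from the statements above; the
column list, types, and index names are assumptions, and the real
benchmark table carries many more indexes than shown here:

    -- hypothetical sketch, not the exact benchmark schema
    CREATE TABLE bench (
        id     int4,
        k100   int4,   -- 100 distinct values
        k500k  int4    -- 500.000 distinct values
    );
    CREATE INDEX bench_k100  ON bench (k100);
    CREATE INDEX bench_k500k ON bench (k500k);

    -- the two statements whose timings are quoted above
    SELECT * FROM bench WHERE k100 = 30;
    UPDATE bench SET k500k = k500k + 1 WHERE k100 = 30;

With 1.000.000 rows and 100 distinct values in k100, the UPDATE has to
replace roughly 10.000 tuples (assuming a uniform distribution), and
every replaced tuple needs a new entry in each index on the table,
which is where Tom's point about the large number of indexes comes in.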
Still, the poor update routines do not explain the strange behavior
that the postmaster runs for hours using at most 10% CPU while heavy
disk activity is observed the whole time. According to top there are
over 80 MB of free memory, and the postmaster has been started with
-o -F (which passes -F to the backends and thereby disables fsync).
Hence this disk activity cannot simply be swapping.
Some more numbers:
database    #rows       inserts   create    make_sqs   make_nqs
                                  index     selects    updates
----------------------------------------------------------------
pgsql       10.000      00:24     00:09     00:16      00:25
pgsql       100.000     04:01     01:29     01:06      49:45
pgsql       1.000.000   39:24     20:49     23:42      ???
Whereas the increase in elapsed time for inserts, index creation, and
selects is roughly proportional to the number of rows, for the update
case it grows much faster than linearly: going from 10.000 to 100.000
rows (a factor of 10), the update time rises from 00:25 to 49:45,
roughly a factor of 120.
Edmund
--
Edmund Mergl                        mailto:E.Mergl@bawue.de
Im Haldenhau 9                      http://www.bawue.de/~mergl
70565 Stuttgart                     fon: +49 711 747503
Germany