On Tue, Aug 31, 2004 at 11:11:02AM -0700, Ron St-Pierre wrote:
> We have a web based application with data that is updated daily. The
> biggest bottleneck occurs when we try to update
> one of the tables. This table contains 58,000 rows and 62 columns, and
> EVERY column is indexed.
That is usually a very bad idea; every UPDATE forces PostgreSQL to maintain
all 62 indexes. Do you really run queries against all 62 of those columns?
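If you're not sure, the statistics collector can tell you which indexes are
actually being scanned. Something along these lines (assuming the stats
collector and row-level stats are enabled, and that the table name is stored
lowercased as 'datatable') would list candidates for dropping:

    SELECT indexrelname, idx_scan
      FROM pg_stat_user_indexes
     WHERE relname = 'datatable'
     ORDER BY idx_scan ASC;

Indexes whose idx_scan stays at zero over a representative period are
probably safe to drop, which would cut the per-UPDATE cost considerably.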
> A typical update is:
> UPDATE dataTable SET field01=44.5, field02=44.5, field03='Bob',
> field04='foo', ... , field60='2004-08-30', field61='2004-08-29'
> WHERE id = 1234;
That looks like poor database normalization, really. Are you sure you don't
want to split this into multiple tables instead of having 62 columns?
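For instance, if the columns really fall into "fairly static descriptive
fields" and "values refreshed daily", a rough split along these lines (all
names purely illustrative, not taken from your schema) would keep the daily
UPDATE from touching the indexes on the static columns at all:

    -- illustrative sketch only: separate static columns from daily-updated ones
    CREATE TABLE data_static (
        id       integer PRIMARY KEY,
        name     text,
        category text
    );

    CREATE TABLE data_daily (
        id     integer NOT NULL REFERENCES data_static(id),
        price  numeric,
        volume numeric,
        as_of  date,
        PRIMARY KEY (id, as_of)
    );

Then only the indexes on data_daily have to be maintained during the nightly run.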
> Other notables:
> The UPDATE is run from within a function: FOR rec IN SELECT ... LOOP
> RETURN NEXT rec; UPDATE dataTable ...
> Postgres 7.4.3
> debian stable
> 2 GB RAM
> 80 GB IDE drive (we can't change it)
Are you doing all this in multiple transactions, or in a single one? Wrapping
the whole batch of updates in one transaction might speed things up noticeably.
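If each statement currently goes through autocommit, batching the run in one
explicit transaction avoids a commit (and thus a WAL flush) per row; purely a
sketch, reusing the column names from your example:

    BEGIN;
    UPDATE dataTable SET field01 = 44.5, field02 = 44.5 WHERE id = 1234;
    UPDATE dataTable SET field01 = 45.0, field02 = 45.1 WHERE id = 1235;
    -- further UPDATEs for the remaining rows go here
    COMMIT;

(Note that a PL/pgSQL function already runs inside a single transaction, so
this only matters if the UPDATEs are being issued as separate top-level
statements from the application.)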
/* Steinar */
--
Homepage: http://www.sesse.net/