Re: query performance question - Mailing list pgsql-performance

From Dan Harris
Subject Re: query performance question
Date
Msg-id 4848098A.5070500@drivefaster.net
In response to Re: query performance question  (tv@fuzzy.cz)
Responses Re: query performance question  (Kenneth Marshall <ktm@rice.edu>)
List pgsql-performance
tv@fuzzy.cz wrote:
>
> 3) Build a table with totals or maybe subtotals, updated by triggers. This
> requires serious changes in the application as well as in the database, but
> it solves the issues of 1) and may give you even better results.
>
> Tomas
>
>
I have tried this.  It's not a magic bullet.  We do our billing based on
counts from huge tables, so accuracy is important to us.  I tried
implementing such a scheme and ended up abandoning it because the
summary table accumulated so many dead tuples during and after large
bulk inserts that selects against it slowed to an unacceptable degree.
Even with a VACUUM issued every few hundred inserts, it still bogged
down under the constant churn from the inserts.
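
For anyone curious, the scheme looked roughly like this -- table and
column names are made up for illustration, not our real schema:

    -- Hypothetical schema: a big fact table plus a per-customer totals table.
    CREATE TABLE line_items (
        customer_id integer NOT NULL,
        amount      numeric NOT NULL
    );

    CREATE TABLE line_item_totals (
        customer_id integer PRIMARY KEY,
        row_count   bigint NOT NULL DEFAULT 0
    );

    -- Every insert bumps the matching row in line_item_totals.  Each bump
    -- leaves a dead tuple behind, which is why a large bulk load bloats
    -- the summary table so quickly.  (Not concurrency-safe as written;
    -- it only shows the shape of the approach.)
    CREATE OR REPLACE FUNCTION bump_line_item_total() RETURNS trigger AS $$
    BEGIN
        UPDATE line_item_totals
           SET row_count = row_count + 1
         WHERE customer_id = NEW.customer_id;
        IF NOT FOUND THEN
            INSERT INTO line_item_totals (customer_id, row_count)
            VALUES (NEW.customer_id, 1);
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER line_items_count
        AFTER INSERT ON line_items
        FOR EACH ROW EXECUTE PROCEDURE bump_line_item_total();

Load a few million rows through that and the handful of totals rows get
rewritten millions of times; every select then has to step over the dead
versions until VACUUM catches up.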

I ended up moving this count tracking into the application level.  It's
messy, and it only allows a single instance of the insert program because
the counts are kept in program memory, but it was the only way I found to
avoid the penalty of constant table churn from the triggered inserts.
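
The shape of that approach is roughly one summary write per batch instead
of one per inserted row.  Again just a sketch against the hypothetical
tables above (in our case the running counts actually live in the
loader's memory); the file name, customer id and count are placeholders:

    BEGIN;
    -- Bulk-load the batch; no per-row trigger touches the totals table.
    COPY line_items FROM '/tmp/batch.csv' WITH CSV;
    -- The loader has been counting rows per customer in memory and
    -- applies each count once per batch, so the totals table sees one
    -- update instead of thousands.
    UPDATE line_item_totals
       SET row_count = row_count + 12345
     WHERE customer_id = 42;
    COMMIT;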

-Dan
