On Wed, Dec 30, 2015 at 11:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Cory Tucker <cory.tucker@gmail.com> writes:
> This table is almost always queried using a combination of (account_id,
> record_id) and is generally pretty fast. However, under certain loads, the
> query becomes slower and slower as time goes on. The workload that causes
> this to happen is when data for a new account_id is being inserted into the
> table. This will happen in rapid succession and may insert millions of
> rows over the course of several hours.
Are those insertions happening in one enormous transaction, or even just a few very large ones?
No, one transaction per row insert.
> The pattern that I notice when this happens is that the CPU on the DB will
> be pegged much higher than usual, and the query to look up records for the
> (account_id, record_id) combo will steadily rise from <1ms to more than 2
> or 3 seconds over time.
I'm suspicious that this is not autovacuum's fault but reflects the cost of checking uncommitted tuples to see if they've become committed yet. If so, there may be little you can do about it except break the insertion into smaller transactions ... which might or might not be all right from a data consistency standpoint.
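To illustrate the "smaller transactions" suggestion: a middle ground between one commit per row and one enormous transaction is committing every N rows. This is only a sketch, not code from the thread; the driver callbacks, table name, and columns are hypothetical stand-ins (e.g. for psycopg2's cursor.execute / connection.commit):

```python
from itertools import islice

def batched(rows, size):
    """Yield successive chunks of `size` rows from any iterable."""
    it = iter(rows)
    while chunk := list(islice(it, size)):
        yield chunk

def insert_in_batches(rows, execute, commit, batch_size=1000):
    """Insert rows, committing once per batch rather than once per row
    or once at the very end.

    `execute` and `commit` are placeholders for a real driver's
    cursor.execute and connection.commit; the table and columns below
    are illustrative only.
    """
    for chunk in batched(rows, batch_size):
        for row in chunk:
            execute(
                "INSERT INTO records (account_id, record_id) VALUES (%s, %s)",
                row,
            )
        commit()  # one commit per batch_size rows
```

Batch size is a tuning knob: larger batches amortize per-commit overhead, while smaller batches keep the window of uncommitted tuples that concurrent queries must examine short.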