Re: Postgresql vs. aggregates - Mailing list pgsql-general

From: Richard Huxton
Subject: Re: Postgresql vs. aggregates
Date:
Msg-id: 40C807BD.6080905@archonet.com
In response to: Re: Postgresql vs. aggregates (jao@geophile.com)
List: pgsql-general
jao@geophile.com wrote:

> But that raises an interesting idea. Suppose that instead of one
> summary row, I had, let's say, 1000. When my application creates
> an object, I choose one summary row at random (or round-robin) and update
> it. So now, instead of one row with many versions, I have 1000 with 1000x
> fewer versions each. When I want object counts and sizes, I'd sum up across
> the 1000 summary rows. Would that allow me to maintain performance
> for summary updates with less frequent vacuuming?

Perhaps the simplest approach is to define the summary table as
containing a SERIAL and your count.
Every time you add another object, insert (nextval(...), 1).
Every 10 seconds or so, summarise the table (i.e. replace 10 rows all
"scored" 1 with 1 row scored 10).
Use sum() over the much smaller table to find your total.
Vacuum regularly.
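
A minimal SQL sketch of the idea, for the record. Table and column names
are illustrative only, and the single-statement consolidation uses a
data-modifying CTE, so it assumes a reasonably modern PostgreSQL; any
equivalent sum-then-delete inside one transaction would do as well.

   -- One row per increment, keyed by a SERIAL.
   CREATE TABLE obj_summary (
       id    SERIAL PRIMARY KEY,
       count BIGINT NOT NULL
   );

   -- On each object creation: add a row "scored" 1.
   INSERT INTO obj_summary (count) VALUES (1);

   -- Every 10s or so: fold the accumulated rows back into a single row.
   -- The DELETE and the replacement INSERT share one snapshot, so rows
   -- inserted concurrently are neither counted nor removed.
   WITH folded AS (
       DELETE FROM obj_summary RETURNING count
   )
   INSERT INTO obj_summary (count)
   SELECT coalesce(sum(count), 0) FROM folded;

   -- The total at any time is a sum over the (now small) table.
   SELECT sum(count) AS total_objects FROM obj_summary;

   -- Keep the dead row versions from piling up.
   VACUUM obj_summary;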


--
   Richard Huxton
   Archonet Ltd
