Re: More speed counting rows - Mailing list pgsql-performance

From Chris Ernst
Subject Re: More speed counting rows
Date
Msg-id 4A6DC0BC.309@esoft.com
In response to More speed counting rows  (Developer <dev002@pas-world.com>)
Responses Re: More speed counting rows  (Developer <dev002@pas-world.com>)
List pgsql-performance
Developer wrote:
> Hello,
>
> I am trying to optimize counting rows when I am using filters
> (selecting by some column parameter/s).
>
> In this case I think that PostgreSQL really counts all the rows,
> resulting in unacceptable times of 4 seconds in the HTTP server response.
> Triggers + a stored counts table do not seem very acceptable in this case,
> because I would need to store 1.5 million counting possibilities.
>
> My question is:
> Any method for indirect count like ordered indexes + quadratic count?
> Any module?
> Any suggestion?
>

I had a similar problem where HTTP requests triggered a count(*) over a
table that was growing rapidly.  The bigger the table got, the longer
the count took.  In my case, however, the counts only had to be a
reasonable estimate of the current state, so I solved the problem with
a count_sums table that gets updated every 30 minutes by a simple
perl script in a cron job.  The HTTP requests now trigger a very fast
select from a tiny table of 9 rows and 2 columns.
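The idea above can be sketched in plain SQL (the table and column names here
are hypothetical stand-ins, and the refresh is shown as SQL rather than the
perl script actually used):

```sql
-- Summary table: one row per filter value, precomputed count.
CREATE TABLE count_sums (
    filter_value text PRIMARY KEY,
    row_count    bigint NOT NULL
);

-- Refresh, run from cron every 30 minutes.  Rebuilding inside one
-- transaction means readers never see a half-updated table.
BEGIN;
DELETE FROM count_sums;
INSERT INTO count_sums (filter_value, row_count)
SELECT some_column, count(*)
FROM big_table
GROUP BY some_column;
COMMIT;

-- The HTTP request then does a cheap primary-key lookup instead of count(*):
SELECT row_count FROM count_sums WHERE filter_value = 'foo';
```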

How "up to date" do the counts need to be?  If the count takes 4
seconds, can you run it every minute and store the counts in a table for
retrieval by the HTTP requests?  Or does it absolutely have to be the
exact count at the moment of the request?

If it needs to be more real-time, you could expand on this by adding
AFTER INSERT/DELETE triggers that automatically update the counts table
to keep it current.  In my case it just wasn't necessary.
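A rough sketch of that trigger approach, again with hypothetical table and
column names (and assuming the count_sums table already has a row for each
filter value):

```sql
CREATE OR REPLACE FUNCTION maintain_count_sums() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE count_sums SET row_count = row_count + 1
         WHERE filter_value = NEW.some_column;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE count_sums SET row_count = row_count - 1
         WHERE filter_value = OLD.some_column;
    END IF;
    RETURN NULL;  -- AFTER trigger; the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER big_table_count_sums
AFTER INSERT OR DELETE ON big_table
FOR EACH ROW EXECUTE PROCEDURE maintain_count_sums();
```

Note that every insert/delete now takes a row lock on the matching
count_sums row, so heavy concurrent writes to the same filter value will
serialize on it; that's the usual trade-off of keeping exact counts current.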

    - Chris
