In response to ".ep" <erick.papa@gmail.com>:
> On Aug 10, 9:42 pm, andreas.kretsch...@schollglas.com ("A.
> Kretschmer") wrote:
> > am Fri, dem 10.08.2007, um 17:46:11 +0800 mailte carter ck folgendes:
> >
> > > Hi all,
> >
> > > I am facing a performance issue here. Whenever I do a count(*) on a table
> > > that contains about 300K records, it takes a few minutes to complete,
> > > whereas my other application, which counts more than 500K records, takes
> > > less than 10 seconds.
> >
> > > I have indexed all the essential columns, but it still does not improve
> > > the speed.
> >
> > Indexes don't help in this case; a 'select count(*)' forces a sequential scan.
> > Do you really need this information? You can find an estimate of the number
> > of rows in the system catalog (reltuples in pg_class, see
> > http://www.postgresql.org/docs/current/interactive/catalog-pg-class.html)
>
> Hi, what if I need to do a count with a WHERE condition? E.g.,
>
> SELECT count(*) from customers where cust_id = 'georgebush' and
> created_on > current_date - interval '1 week' ;
>
> Can I get the info about this from somewhere in the pg system tables
> as well? Queries like these are very common in most applications, so
> I'm hoping I can avoid the sequential scans!
>
> Many thanks for any tips.
If you only need an estimate, you can run an EXPLAIN on the query and
grep out the estimated row count. Its accuracy depends on how up to date
the planner statistics are, but it's very fast and works with a query of
any complexity.
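For example, with the query from above (the database name "mydb" is just a
placeholder), EXPLAIN prints the plan without executing the query, and the
top plan line carries a rows= estimate you can pull out with grep:

    psql -c "EXPLAIN SELECT count(*) FROM customers
             WHERE cust_id = 'georgebush'
               AND created_on > current_date - interval '1 week'" mydb \
        | grep -o 'rows=[0-9]*'

Running ANALYZE on the table beforehand keeps that estimate reasonably close
to the real count.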
If you need fast, accurate counts, your best bet is to set up triggers on
your tables to maintain counts in a separate table. This can be rather
complex to set up, and you take a performance hit on every insert and delete,
but I don't know of any other way to do it.
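A minimal sketch of that trigger approach (the row_counts table and the
count_trig function are just illustrative names, and the table name is
hard-coded for simplicity):

    CREATE TABLE row_counts (tablename text PRIMARY KEY, cnt bigint NOT NULL);
    INSERT INTO row_counts VALUES ('customers', 0);

    CREATE FUNCTION count_trig() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            UPDATE row_counts SET cnt = cnt + 1 WHERE tablename = 'customers';
        ELSIF TG_OP = 'DELETE' THEN
            UPDATE row_counts SET cnt = cnt - 1 WHERE tablename = 'customers';
        END IF;
        RETURN NULL;  -- result is ignored for AFTER triggers
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER customers_count AFTER INSERT OR DELETE ON customers
        FOR EACH ROW EXECUTE PROCEDURE count_trig();

The count is then a cheap single-row lookup:

    SELECT cnt FROM row_counts WHERE tablename = 'customers';

Note that every insert and delete now updates the same counter row, so
concurrent writers will serialize on it; that contention is part of the
performance hit mentioned above.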
--
Bill Moran
http://www.potentialtech.com