Gerald Gutierrez writes:
> I have a table with about 5 million rows in it. I need to be able to get the
> exact number of rows in the table at runtime. So I tried the following:
>
> xxx=> explain select count(seed) from mytable;
> NOTICE: QUERY PLAN:
>
> Aggregate  (cost=103152.27..103152.27 rows=1 width=4)
>   ->  Seq Scan on mytable  (cost=0.00..89756.42 rows=5358342 width=4)
>
> Actually executing this query takes about 2 minutes on a P3-800MHz machine
> with 512MB of RAM.
>
> I have an index on the seed column, and I have done VACUUM ANALYZE on the
> table after inserting the rows. Is there any way I can get this to be fast?
For a count of all rows you necessarily need to visit all rows (at least
in this implementation), so an index is of no use. For a sequential scan
with little computation involved, this really comes down to pure hardware
speed. You might be able to speed it up a little by using count(*)
instead. Note that there's a semantic difference: count(seed) doesn't
count the rows where seed is null, which is probably not what you
intended anyway.
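
To make the semantic difference concrete, here is a minimal sketch with a
throwaway table (the table name "demo" is just for illustration, not your
actual schema):

    CREATE TABLE demo (seed integer);
    INSERT INTO demo VALUES (1);
    INSERT INTO demo VALUES (2);
    INSERT INTO demo VALUES (NULL);

    SELECT count(*) FROM demo;     -- counts all three rows
    SELECT count(seed) FROM demo;  -- counts only the two non-null seeds
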
--
Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/