Thread: select count is too slow

From
Kumar Anand
Date:
Dear all,

I have been facing this problem for the last few days.

Here is the test2 table, with only one column, id:
erp_test=> \d test2
     Table "public.test2"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer |

I insert 10,000,000 entries into this table:
erp_test=> INSERT INTO test2 VALUES (generate_series(1,10000000));
INSERT 0 10000000

Then I delete all the entries:
erp_test=> delete from test2;
DELETE 10000000

Then I insert only 10 entries:
erp_test=> INSERT INTO test2 VALUES (generate_series(1,10));
INSERT 0 10

Now when I try to count the rows in this table, it takes a long time
(about 2-3 seconds) for just these 10 rows:
erp_test=> SELECT count(*) from test2;
 count
-------
    10
(1 row)

This is the EXPLAIN ANALYZE output for the same query:

erp_test=> EXPLAIN ANALYZE SELECT count(*) from test2;
                                                       QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=198652.13..198652.14 rows=1 width=0) (actual time=2123.471..2123.472 rows=1 loops=1)
   ->  Seq Scan on test2  (cost=0.00..169732.70 rows=11567770 width=0) (actual time=2123.454..2123.454 rows=10 loops=1)
 Total runtime: 2123.609 ms
(3 rows)


Can anyone help me speed up my SELECT count query?

--
Thanks & Regards
Kumar Anand

Re: select count is too slow

From
bricklen
Date:
Did you vacuum your table after the delete?

On Thu, Aug 20, 2009 at 2:36 AM, Kumar Anand <kumar.anand@merceworld.com> wrote:
[original message snipped]

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: select count is too slow

From
bricklen
Date:
VACUUM ANALYZE, rather

On Thu, Aug 20, 2009 at 2:21 PM, bricklen <bricklen@gmail.com> wrote:
Did you vacuum your table after the delete?


On Thu, Aug 20, 2009 at 2:36 AM, Kumar Anand <kumar.anand@merceworld.com> wrote:
[original message snipped]


Re: select count is too slow

From
Iñigo Martinez Lasala
Date:
You don't have accurate statistics for that table.

Try this after deleting:
VACUUM ANALYZE test2;

I suggest activating autovacuum in your postgresql.conf, so you won't have to vacuum and analyze tables by hand.
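For example, a minimal sketch of the relevant postgresql.conf settings (assuming PostgreSQL 8.3 or later; the values shown are illustrative, not tuned recommendations):

```
# postgresql.conf -- enable autovacuum so dead tuples are removed and
# planner statistics are refreshed automatically
autovacuum = on
track_counts = on                      # required for autovacuum to work
autovacuum_naptime = 1min              # how often workers check each database
autovacuum_vacuum_scale_factor = 0.2   # vacuum after ~20% of a table's rows change
```

After changing these, reload the server configuration (e.g. with pg_ctl reload) for the settings to take effect.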

-----Original Message-----
From: Kumar Anand <kumar.anand@merceworld.com>
To: pgsql-admin@postgresql.org
Subject: [ADMIN] select count is too slow
Date: Thu, 20 Aug 2009 15:06:45 +0530

[original message snipped]

Re: select count is too slow

From
Scott Marlowe
Date:
On Thu, Aug 20, 2009 at 3:36 AM, Kumar Anand<kumar.anand@merceworld.com> wrote:
> Dear all,
>
> [original message snipped]

Don't bloat your tables so much?  In PostgreSQL, when you delete a
tuple it is still physically there, in case some other transaction
might still see it.  Try truncating your table instead of deleting
from it.  Right now PostgreSQL has to trawl through millions of dead
rows to find the few that are alive.  You can also use VACUUM FULL to
reclaim all the space, but you may need to REINDEX after that to keep
bloat down in the indexes.
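A sketch of the two approaches above, runnable in psql against a test database:

```sql
-- Fast path: TRUNCATE removes all rows and reclaims the space at once,
-- leaving no dead tuples behind for later scans to skip over.
TRUNCATE TABLE test2;

-- If you need DELETE semantics (e.g. a WHERE clause, or row triggers
-- must fire), reclaim the space afterwards:
DELETE FROM test2;
VACUUM FULL test2;    -- rewrites the table, returning the space
REINDEX TABLE test2;  -- rebuilds indexes bloated by the mass delete
```

Note that VACUUM FULL takes an exclusive lock on the table while it runs, so on a busy system TRUNCATE (or a plain VACUUM plus autovacuum) is usually the better choice.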