Thread: Cost problem

Cost problem

From
Victor
Date:
Hello.

I use psql (PostgreSQL) 7.4.5.

I have a cost problem.

A simple select takes too long.
I have a table with approx. 900 rows.
All rows are deleted and reinserted once a minute.

EXPLAIN SELECT * FROM logati;
                            QUERY PLAN
-------------------------------------------------------------------
 Seq Scan on logati  (cost=0.00..100741.22 rows=1198722 width=340)


EXPLAIN ANALYZE SELECT * FROM logati;
                                                      QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
 Seq Scan on logati  (cost=0.00..100741.22 rows=1198722 width=340)
(actual time=35927.945..35944.272 rows=842 loops=1)
 Total runtime: 35945.840 ms
(2 rows)


The problem is that EXPLAIN reports that the table has
1198722 rows.
But the table has only 836 rows, and the select takes a long time.

If I recreate the table the number of rows is OK and the select is quick
for some time, but it becomes slower over time.

How can I fix this?

PS: The table has an index but no primary key.

Thank you

Re: Cost problem

From
Jaime Casanova
Date:
On 8/2/05, Victor <victor@ambra.ro> wrote:
> Hello.
>
> I use psql (PostgreSQL) 7.4.5.
>
> I have a cost problem.
>
> A simple select takes too long.
> I have a table with approx. 900 rows.
> All rows are deleted and reinserted once a minute.
>
> EXPLAIN SELECT * FROM logati;
>                            QUERY PLAN
> -------------------------------------------------------------------
>  Seq Scan on logati  (cost=0.00..100741.22 rows=1198722 width=340)
>
>
> EXPLAIN ANALYZE SELECT * FROM logati;
>                                                      QUERY PLAN
> -----------------------------------------------------------------------------------------------------------------------
>  Seq Scan on logati  (cost=0.00..100741.22 rows=1198722 width=340)
> (actual time=35927.945..35944.272 rows=842 loops=1)
>  Total runtime: 35945.840 ms
> (2 rows)
>
>
> The problem is that EXPLAIN reports that the table has
> 1198722 rows.
> But the table has only 836 rows, and the select takes a long time.
>
> If I recreate the table the number of rows is OK and the select is quick
> for some time, but it becomes slower over time.
>
> How can I fix this?
>
> PS: The table has an index but no primary key.
>
> Thank you
>

Run ANALYZE on the table from time to time... and, better still,
VACUUM it regularly...
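
A minimal sketch of that maintenance, using the table name from the thread (how often to run it is an assumption; match it to the once-a-minute delete/reinsert cycle):

```sql
-- Reclaim the space held by deleted row versions:
VACUUM logati;
-- Refresh the planner statistics (fixes the inflated row estimate):
ANALYZE logati;
```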


--
Atentamente,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

Re: Cost problem

From
Ragnar Hafstað
Date:
On Tue, 2005-08-02 at 17:32 +0300, Victor wrote:
> Hello.
>
> I use psql (PostgreSQL) 7.4.5.
>
> I have a cost problem.
>
> A simple select takes too long.
> I have a table with approx. 900 rows.
> All rows are deleted and reinserted once a minute.
>

> EXPLAIN ANALYZE SELECT * FROM logati;
>                                                       QUERY PLAN
> -----------------------------------------------------------------------------------------------------------------------
>  Seq Scan on logati  (cost=0.00..100741.22 rows=1198722 width=340)
> (actual time=35927.945..35944.272 rows=842 loops=1)
>  Total runtime: 35945.840 ms

The rows are not actually removed when DELETE happens.
The next VACUUM will put them into the Free Space Map
if they are older than the oldest running transaction,
so that their space can be reused.
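
The mismatch can be checked directly: pg_class.reltuples holds the estimate the planner uses, while count(*) scans the live rows. A quick sketch:

```sql
-- Planner's stored estimate (stale until the next ANALYZE):
SELECT reltuples FROM pg_class WHERE relname = 'logati';

-- Actual number of live rows (reads the whole table):
SELECT count(*) FROM logati;
```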

This means that you must VACUUM this table frequently
to keep it from bloating. If the data distribution keeps
changing, you should also ANALYZE frequently (but not
when the table is empty), but that is another matter.

To fix an already severely bloated table, you might
need to do a VACUUM FULL on it once, but after that
frequent-enough regular (non-FULL) VACUUMs should do.
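
For example, since 7.4 has no autovacuum integrated in the server, a cron entry is one common approach (the database name and the one-minute interval below are assumptions, not from the thread):

```sql
-- One-time repair; takes an exclusive lock while it compacts the table:
VACUUM FULL logati;

-- Thereafter, a plain VACUUM often enough to keep up with the churn,
-- e.g. from cron (hypothetical database name "mydb"):
--   * * * * *  psql -d mydb -c 'VACUUM ANALYZE logati;'
```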

Depending on the nature of your processes, you might
want to use TRUNCATE to empty your table instead.
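
A sketch of that change (note that TRUNCATE takes an ACCESS EXCLUSIVE lock, so concurrent sessions will block while it runs):

```sql
-- Instead of deleting every row, which leaves dead tuples behind:
-- DELETE FROM logati;
-- truncate, which reclaims the space immediately:
TRUNCATE TABLE logati;
```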

gnari