Re: Large Table With Only a Few Rows - Mailing list pgsql-performance

From: Chris Browne
Subject: Re: Large Table With Only a Few Rows
Date:
Msg-id: 60slq4zpbt.fsf@dba2.int.libertyrms.com
Responses: Re: Large Table With Only a Few Rows ("Peter Childs" <peterachilds@gmail.com>)
List: pgsql-performance

"Nik" <XLPizza@gmail.com> writes:
> I have a table that has only a few records in it at any given time;
> they get deleted every few seconds and new records are inserted. The
> table never has more than 5-10 records in it.
>
> However, I noticed deteriorating performance on deletes and inserts,
> so I performed VACUUM ANALYZE on it three times (twice in a row, and
> once two days later). The statistics say that the table size is
> 863 MB, the TOAST table size is 246 MB, and the index size is 134 MB,
> even though the table has only 5-10 rows in it. I was wondering how
> I can reclaim all this space and improve the performance?

You need to run VACUUM ANALYZE on this table very frequently.

Based on what you describe, "very frequently" means at least once
per minute.

Schedule a cron job specifically to vacuum this table, with a cron
entry like the following:

* * * * * /usr/local/bin/vacuumdb -z -t my_table -p 5432 my_database
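
To confirm that the frequent vacuuming really is keeping the table
small, you can peek at the catalogs.  A rough sketch, assuming you're
on 8.1 or later (for pg_relation_size()) and that the table is named
my_table as in the cron entry above:

   -- relpages/reltuples are only as fresh as the last VACUUM/ANALYZE
   select relname,
          relpages,                        -- 8kB pages on disk
          reltuples,                       -- planner's row estimate
          pg_relation_size('my_table') as bytes
   from pg_class
   where relname = 'my_table';

If the vacuuming is keeping up, relpages should stay small and stable
rather than growing without bound.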

Of course, you first need to bring the table back down to size.

Running CLUSTER on it is probably the fastest way to do that:

   cluster my_table_pk on my_table;

VACUUM FULL would also do the job, but probably not as quickly.
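
One caveat, whichever way you go: VACUUM FULL compacts the heap but
tends not to shrink the indexes (it can even bloat them), so a REINDEX
afterwards is worth doing.  Something along these lines, with the
table name assumed as above:

   vacuum full my_table;
   reindex table my_table;

That should take care of the 134 MB of index space you mentioned.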
--
(reverse (concatenate 'string "gro.gultn" "@" "enworbbc"))
http://cbbrowne.com/info/sgml.html
"Now they can put you in jail if they *THINK* you're gonna commit a
crime.  Let me say that again, because it sounds vaguely important"
--george carlin
