Steve Crawford wrote:
> Ivan Sergio Borgonovo wrote:
>> I still have to investigate whether the tables are really getting
>> larger, but at first guess there shouldn't be any good reason for
>> them to grow so large so fast. So I was wondering: could anything
>> other than the tables containing more records make a backup much
>> larger than it was?
>>
>> The only thing that should really have changed is the number of
>> concurrent connections during a backup.
>>
> Can we assume that by backup you mean pg_dump/pg_dumpall? If so, then
> the change is likely due to increasing data in the database. I have a
> daily report that emails me a crude but useful estimate of table
> utilization based on this query:
>
> select
>     relname as table,
>     to_char(8*relpages, '999,999,999') as "size (kB)",
>     (100.0*relpages/(select sum(relpages) from pg_class
>                      where relkind='r'))::numeric(4,1) as percent
> from
>     pg_class
> where
>     relkind = 'r'
> order by
>     relpages desc
> limit 20;
A better way to do this would likely be to use the pg_*_size functions
detailed here:
http://www.postgresql.org/docs/8.3/static/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE
In particular pg_total_relation_size(), pg_size_pretty(), and the like.
That seems much more straightforward than the query mentioned above.
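For instance, something along these lines might replace the relpages-based
report (a sketch only; the alias names are my own, and pg_total_relation_size()
counts the table plus its indexes and TOAST data, so the numbers will be
larger than the heap-only relpages estimate):

-- Top 20 tables by total on-disk size (heap + indexes + TOAST),
-- using the size functions available since 8.1/8.3.
select
    relname as table_name,
    pg_size_pretty(pg_total_relation_size(oid)) as total_size
from
    pg_class
where
    relkind = 'r'
order by
    pg_total_relation_size(oid) desc
limit 20;

Note that pg_total_relation_size() measures actual on-disk size right now,
while relpages is only updated by VACUUM/ANALYZE, so the two can disagree
on a busy database.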
--
Chander Ganesan
Open Technology Group, Inc.
One Copley Parkway, Suite 210
Morrisville, NC 27560
919-463-0999/877-258-8987
http://www.otg-nc.com
Ask me about expert PostgreSQL training, delivered worldwide!