Thread: table size is bigger than expected

table size is bigger than expected

From: Jian Shi
Date:

Hey,

  I’m a new user of PostgreSQL. I found that one of my tables is taking up an unexpectedly large amount of space:

select pg_size_pretty(pg_relation_size('archive_files'));
 pg_size_pretty
----------------
 1113 MB

The structure of this table is as follows:

   Column   |       Type        | Modifiers
------------+-------------------+-----------
 archive_id | integer           | not null
 dir_no     | integer           | not null
 fname      | character varying | not null
 type       | smallint          | not null
 size       | bigint            | not null
 mod_date   | integer           | not null
 blocks     | bigint            | not null
 blk_offset | bigint            | not null

The field “fname” stores file names without any directory names. In our case, each record is expected to take around 300 bytes.

However, this table contains 934,829 records, which means each record takes about 1.2 KB on average.
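
For reference, these numbers can be cross-checked directly in SQL (a sketch against the table above; pg_column_size() reports the on-disk size of a value, and passing the whole row gives a rough per-row width):

-- on-disk size of the table divided by its row count
select pg_size_pretty(pg_relation_size('archive_files') / count(*)) as avg_row_size
from archive_files;

-- rough average width of the stored rows, and of fname by itself
select avg(pg_column_size(t.*))   as avg_row_bytes,
       avg(pg_column_size(fname)) as avg_fname_bytes
from archive_files t;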

I ran VACUUM and REINDEX, but the size is still the same. Is there anything else that I can do?

Thanks!

John

Re: table size is bigger than expected

From: Justin Pitts
Date:
On Thu, Aug 4, 2011 at 2:56 PM, Jian Shi <jshi@unitrends.com> wrote:
> Hey,
>
>   I’m a new user of PostgreSQL. I found one of my tables is taking
> unexpectedly large space:
>
> select
> pg_size_pretty(pg_relation_size('archive_files'));
>
>  pg_size_pretty
>
> ----------------
>
> 1113 MB
>
>
> the field “fname” stores file names without any directory names. In our
> case, each record is expected to take around 300 bytes.
>
> However, this table contains 934829 records, which means each record takes
> about 1.2KB.
>
What does this query yield?

select pg_size_pretty(sum(length(fname))) from archive_files;

Re: table size is bigger than expected

From: Steve Crawford
Date:
On 08/04/2011 11:56 AM, Jian Shi wrote:

> Hey,
>
>   I’m a new user of PostgreSQL. I found one of my tables is taking unexpectedly large space:...
>
> I did vaccum, reindex, the size is still the same. Is there anything else that I can do?

Did you try CLUSTER? A basic vacuum only identifies space as reusable; it doesn't actually shrink the on-disk size.

If you have workloads that update or delete a small number of tuples per transaction, the autovacuum process should keep things reasonably under control. But if you run transactions that do bulk updates or deletes, you may need to intervene. The CLUSTER statement will completely rewrite and reindex your table (and will physically reorder the table based on the selected index). Note: CLUSTER requires an exclusive lock on the table.
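
For concreteness, a minimal sketch of that kind of maintenance on the table above (the index name below is a placeholder; CLUSTER needs an existing index to order by, so substitute one that is actually defined on archive_files):

-- Rewrites the table in index order, rebuilds its indexes, and returns the
-- reclaimed space to the operating system. Holds an ACCESS EXCLUSIVE lock.
CLUSTER archive_files USING archive_files_pkey;

-- Alternative not mentioned above: on 9.0 and later, VACUUM FULL also rewrites
-- the table compactly (and also takes an exclusive lock), without reordering rows.
VACUUM FULL archive_files;

-- Re-check the size afterwards:
select pg_size_pretty(pg_relation_size('archive_files'));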

Cheers,
Steve