Re: Large DB, several tuning questions: Index sizes, VACUUM, REINDEX, Autovacuum - Mailing list pgsql-performance

From: Andres Freund
Subject: Re: Large DB, several tuning questions: Index sizes, VACUUM, REINDEX, Autovacuum
Msg-id: 200912052100.00518.andres@anarazel.de
In response to: Large DB, several tuning questions: Index sizes, VACUUM, REINDEX, Autovacuum (Andreas Thiel <andreas.thiel@u-blox.com>)
List: pgsql-performance
Hi,

On Saturday 05 December 2009 00:03:12 Andreas Thiel wrote:
> I'm running PostgreSQL 8.3.6 on a 32-Bit Centos 4 machine (which I
> probably should update to 64 Bit soon)
How much memory?


> I'm going to work on the table size of the largest table (result_orig)
> itself by eliminating columns, stuffing n Booleans into bit(n)'s,
> replacing double precision by reals, etc.. By this I should be able to
> reduce the storage per row to ~1/3 of the bytes currently used.
That sounds rather ambitious - did you factor in the per-row overhead? Each heap
tuple carries roughly 23 bytes of header plus alignment padding, and that part
doesn't shrink when you compress the payload.
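To see what you can realistically gain, it helps to measure the current average
bytes per row first. A sketch (table name taken from your mail; reltuples is
only an estimate, refreshed by VACUUM/ANALYZE):

```sql
-- Average on-disk bytes per row, including headers, padding and dead space
SELECT pg_size_pretty(pg_relation_size('result_orig')) AS table_size,
       reltuples,
       pg_relation_size('result_orig') / nullif(reltuples, 0) AS bytes_per_row
FROM pg_class
WHERE relname = 'result_orig';
```

If bytes_per_row is already close to the tuple-header floor, shrinking columns
won't buy you the 3x you are hoping for.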

> I have the same information stored in an Oracle 10g DB which consumes
> only 70G data and 2G for indexes. The schema may be better optimized,
> but for sure there is a table with 4 billion rows inside as well. So
> it's about 10x smaller in disk space than PgSQL. I wonder why.
That's hard to say without seeing the table definitions for both. Could you post
them?

2GB for indexes sounds rather small - are those btrees?
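A quick way to see where the index space actually goes is to list the largest
indexes from the catalog (pg_relation_size/pg_size_pretty are available in 8.3):

```sql
-- Ten largest indexes in the current database
SELECT c.relname AS index_name,
       pg_size_pretty(pg_relation_size(c.oid)) AS size
FROM pg_class c
WHERE c.relkind = 'i'
ORDER BY pg_relation_size(c.oid) DESC
LIMIT 10;
```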

It might also be interesting to look into the free space map to see how much
empty space there is - there is a contrib module pg_freespacemap for that.
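Roughly like this - on 8.3 you load the contrib SQL script into the database
first (there is no CREATE EXTENSION yet), and then query the views it creates.
Column names here are from memory, so double-check them against the contrib
README in your installation:

```sql
-- After loading contrib/pg_freespacemap.sql into the database,
-- map FSM entries back to relation names via relfilenode
SELECT c.relname, f.storedpages
FROM pg_freespacemap_relations f
JOIN pg_class c ON c.relfilenode = f.relfilenode
ORDER BY f.storedpages DESC
LIMIT 10;
```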

You can also check how many dead tuples an 'ANALYZE VERBOSE tablename' reports.
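That only samples a portion of the table, so it is cheap even on your large
relations:

```sql
-- The VERBOSE output includes, per table, an estimate of live vs. dead
-- rows in the sampled pages - a quick bloat indicator without a full scan
ANALYZE VERBOSE result_orig;
```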

> Is such disk usage for indexes expected? What can I do to optimize? I
> could not run yet a VACUUM on result_orig, as I hit into max_fsm_pages
> limit (still trying to adjust that one). I tried REINDEX, it didn't
> change anything.
So it's quite possible that your relations are heavily bloated - although after
a REINDEX that shouldn't matter much for the indexes themselves.

Btw, have you possibly left over some old prepared transactions or an
idle-in-transaction connection? Both can lead to severe bloat, because they keep
VACUUM from removing dead rows.
For the former you can check the system view pg_prepared_xacts, for the latter
pg_stat_activity.
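Both checks are one-liners; on 8.3 an idle transaction shows up in the
current_query column of pg_stat_activity:

```sql
-- Leftover two-phase transactions hold back VACUUM's cleanup horizon
SELECT gid, prepared, owner, database FROM pg_prepared_xacts;

-- Sessions stuck in an open transaction do the same
SELECT procpid, usename, xact_start
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction';
```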

> ### My Issue No. 2: relpages and VACUUM
> I have another table "test" which is - as starting point - created by
> INSERTs and then UPDATE'd. It has the same columns and roughly the same
> number of rows as table test_orig,  but consumes 160 times the number of
> pages. I tried VACUUM on this table but it did not change anything on
> its relpages count. Maybe this is just because VACUUM without FULL does
> not re-claim disk space, i.e. relpages stays as it is? I did observe
> that after VACUUM, a REINDEX on this table did considerably shrink down
> the size of its indexes (test_test_id, test_lt_id).
A normal VACUUM does not move tuples around - it only marks space as free so it
can later be reused.

(If the free space is at the end of the table it tries to truncate the file,
provided no locks prevent it.)
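You can watch this directly in the catalog: relpages is refreshed by
VACUUM/ANALYZE but won't drop after a plain VACUUM unless trailing space could
be truncated. Comparing the UPDATE-heavy table against its freshly loaded
sibling (names taken from your mail) makes the dead space visible:

```sql
-- relpages should differ by roughly the bloat factor you observed
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('test', 'test_orig');
```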

> ### My Issue No 3: VACCUM FULL out of memory
> I tried to do a VACCUM FULL on the two tables (test, result_orig)
> mentioned above. In both cases it fails with a very low number on out of
> memory like this:
>
> ERROR:  out of memory
> DETAIL:  Failed on request of size 224.
Well, that's the size of the single allocation that failed, not the total amount
of memory allocated. Normally the postmaster writes a memory-context dump to the
log when that happens. Did you get anything like that?

> I use these kernel settings:
> kernel.shmmni = 4096
> kernel.shmall = 2097152
> kernel.shmmax = 2147483648
> vm.overcommit_memory = 2

> max_stack_depth = 8MB                   # min 100kB
That sounds a bit too high if you consider that libc and friends may use some
stack space as well - although that should be unrelated to the current issue.

> max_fsm_pages = 70000000                # min max_fsm_relations*16, 6
> bytes each
As a very rough guide you can start with the sum of relpages in pg_class for
that one.
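That sum can be pulled straight from the catalog:

```sql
-- Rough starting point for max_fsm_pages: total pages currently in use
SELECT sum(relpages) FROM pg_class;
```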

> max_fsm_relations = 4194304             # min 100, ~70 bytes each
That seems kinda high. Do you have multiple millions of relations? It might be
related to the OOM situation during VACUUM FULL, although it seems rather
unlikely.

> ###My Issue No. 4: Autovacuum
> I have the feeling that Autovacuum is not really running, else why are
> tables and indexes growing that much, especially "test" table?
You should see notes about autovacuum in the logs. With an
autovacuum_vacuum_scale_factor of 0.2 you need changed tuples amounting to 0.2
times the size of a table before autovacuum starts.
For a billion rows that's quite a bit. I have found that this setting is often
too high.
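With the 8.3 defaults (autovacuum_vacuum_threshold = 50, scale factor = 0.2)
you can estimate the trigger point per table directly; for your 4-billion-row
table that comes out to roughly 800 million updated/deleted rows:

```sql
-- Dead rows needed before autovacuum considers the table, 8.3 defaults
SELECT relname, reltuples,
       50 + 0.2 * reltuples AS autovacuum_trigger_rows
FROM pg_class
WHERE relname = 'result_orig';
```

Lowering the scale factor (globally or per table via pg_autovacuum on 8.3)
makes autovacuum run far more often on tables this size.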

> How would I check it is running correctly? I don't see any error
> messages in syslog from autovacuum.
You should see messages about it starting in the syslog.
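One way to make its activity explicit (assuming you can edit postgresql.conf
and reload) is the 8.3 parameter log_autovacuum_min_duration:

```
# postgresql.conf: log every autovacuum/autoanalyze action taking at
# least 0 ms, i.e. all of them; -1 (the default) disables this logging
log_autovacuum_min_duration = 0
```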


Andres
