Re: using a lot of maintenance_work_mem - Mailing list pgsql-hackers

From: Frederik Ramm
Subject: Re: using a lot of maintenance_work_mem
Msg-id: 4D5EC911.2030404@remote.org
In response to: Re: using a lot of maintenance_work_mem (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
Tom & Kevin,
   thank you for your replies. Kevin, I had already employed all the 
tricks you mention, except temporary tables, which would be hard to use 
given the structure of my application (though I could perhaps try 
something like pgbouncer). Thanks a lot for sharing the ideas.
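
For context, the knob in question gets raised per session before the 
heavy statements, along these lines (the value here is purely 
illustrative):

    -- illustrative value only; a single allocation inside the sort
    -- code is still capped by MaxAllocSize, which is what this
    -- thread is about
    SET maintenance_work_mem = '24GB';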

Tom Lane wrote:
>> If I were to either (a) increase MaxAllocSize to, say, 48 GB instead of 
>> 1 GB, or (b) hack tuplesort.c to ignore MaxAllocSize, just for my local 
>> setup - would that likely be viable in my situation, or would I break 
>> countless things?
> 
> You would break countless things.

Indeed I did. I tried to raise MaxAllocSize from 1 GB to a much larger 
value, but immediately got strange memory allocation errors during the 
regression tests (something that looked like a wrapped integer in a 
memory allocation request).

I reduced the number in steps, and found I could compile and run 
PostgreSQL 8.3 with a MaxAllocSize of 4 GB, and PostgreSQL 9.0 with 2 GB 
without breakage.
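
For reference, the stock definition lives in 
src/include/utils/memutils.h, and the kind of local hack I mean amounts 
to a one-line change there. A rough sketch, not my exact patch:

    /* Stock definition in src/include/utils/memutils.h (1 GB - 1): */
    #define MaxAllocSize	((Size) 0x3fffffff)

    /* Sketch of the local hack.  Beyond some point, request sizes
     * apparently start to wrap in callers that still do their size
     * arithmetic in plain int (tuplesort.c among them), which would
     * explain the bogus allocation requests I saw with larger values. */
    #define MaxAllocSize	((Size) 0x7ffffffe)	/* just under 2 GB */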

In a completely un-scientific test run, comprising 42 individual SQL 
statements aimed at importing and indexing a large volume of data, I got 
the following results:

pg8.3 with normal MaxAllocSize .................. 15284s
pg8.3 with MaxAllocSize increased to 4 GB ....... 14609s (-4.5%)
pg9.0 with normal MaxAllocSize .................. 12969s (-15.2%)
pg9.0 with MaxAllocSize increased to 2 GB ....... 13211s (-13.5%)

> I'd want to see some evidence that it's actually
> helpful for production situations.  I'm a bit dubious that you're going
> to gain much here.

So, on the whole it seems you were right; at least with the modest 
memory increase I managed to build in without breaking things, 
performance doesn't improve much on 8.3, and on PostgreSQL 9.0 it even 
came out slightly slower.

The single query that gained most from the increase in memory was an 
ALTER TABLE statement adding a BIGINT primary key to a table of about 
50 million records - this was 75% faster on both 8.3 and 9.0, but since 
it took only 120 seconds to begin with, it didn't change the overall 
result much.
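
For illustration, the statement had roughly this shape (table and 
column names made up):

    ALTER TABLE nodes ADD PRIMARY KEY (id);   -- id is BIGINT, ~50M rows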

The single query where pg9.0 beat pg8.3 by a country mile was a CREATE 
INDEX statement on a BIGINT column of a table with about 500 million 
records - this took 2679 seconds on stock 8.3, 2443 seconds on 
large-memory 8.3, and around 1650 seconds on 9.0, large memory or not.

The query that, on both 8.3 and 9.0, took about 10% longer with more 
memory was a CREATE INDEX statement on a TEXT column.

All this, as I said, is completely un-scientific - I did take care to 
flush caches and not to run anything in parallel, but that was about 
all, so the numbers might well come out differently across repeated runs.

My conclusion from all of this? Switch to 9.0, of course ;)

Bye
Frederik

-- 
Frederik Ramm  ##  eMail frederik@remote.org  ##  N49°00'09" E008°23'33"

