On 09/24/2012 15:51, Kiriakos Tsourapas wrote:
> Hi,
>
> Thank you for your response.
> Please find below my answers/comments.
>
>
> On Sep 24, 2012, at 15:21, Julien Cigar wrote:
>
>> Hello,
>>
>> 1) upgrade your PostgreSQL installation, there have been numerous bugfixes releases since 8.4.2
> Not possible right now. It will have to be the last solution.
>> 2) you'll have to show us an explain analyze of the slow queries. If I take a look at those you provided, everything runs in less than 1ms.
> Will do so in a couple of days, when it gets slow again.
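When it does get slow, running EXPLAIN ANALYZE on one of the affected queries (from psql) is what will show where the time goes; `your_table` and the WHERE clause below are just placeholders:

```sql
-- EXPLAIN ANALYZE actually executes the query and reports real row
-- counts and timings, not just the planner's estimates.
-- Replace with one of the queries that becomes slow:
EXPLAIN ANALYZE SELECT * FROM your_table WHERE some_column = 'some_value';
```

Capturing the output both while the system is fast and while it is slow makes the comparison far more useful.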
>> 3) with 200 records you'll always have a seqscan
> Does it really matter? I mean, with 200 records any query should be ultra fast, right?
right..!
>> 4) how much memory do you have? shared_buffers = 256MB and effective_cache_size = 512MB look OK only if you have between 1 and 2GB of RAM
> I have included the server specs and the results of top commands, showing that we have 8GB of RAM and how much memory is used/cached/swapped. Personally I don't quite understand Linux memory accounting, but I have posted them hoping you may see something I don't.
With 8GB of RAM I would start with shared_buffers at 1GB and
effective_cache_size at 4GB. I would also raise the default work_mem to
32MB and maintenance_work_mem to 512MB.
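In postgresql.conf terms, that starting point for a dedicated 8GB box would look like this (note that shared_buffers needs a server restart to take effect; the other three only need a reload):

```
# postgresql.conf -- starting values for a dedicated 8GB server,
# per the suggestion above; tune from here rather than treating
# these as final numbers.
shared_buffers = 1GB
effective_cache_size = 4GB        # planner hint only, allocates nothing
work_mem = 32MB                   # per sort/hash node, per backend -- be careful
maintenance_work_mem = 512MB      # used by vacuum and index builds
```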
>> 5) synchronous_commit = off should only be used if you have a battery-backed write cache.
> I agree with the comments that have followed my post. I have changed it, knowing there is a small risk, but hoping it will help our performance.
>> 6) autovacuum_naptime should be changed only if autovacuum is constantly running (so if you have dozens of databases in your cluster)
> As I said, changing the autovacuum values did not change the problem, so you may as well consider that we have the default values for autovacuuming... the problem existed with the default values too.
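Rather than guessing from the settings, 8.4's statistics views can show whether autovacuum is actually keeping up; for example:

```sql
-- When was each table last vacuumed/analyzed, and how many dead
-- rows has it accumulated? (column names as of PostgreSQL 8.4)
SELECT relname, n_dead_tup,
       last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```

If last_autovacuum is NULL or very old on the tables involved, bloat from dead rows is a plausible explanation for queries over a 200-row table getting slow.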
>> 7) are you sure the problem isn't related to Bucardo ?
> Not at all sure... I have no idea. Can you suggest of a way to figure it out ?
Unfortunately I have never used Bucardo, but make sure that it's not a problem
with your network (and that you understand all the challenges involved
in multi-master replication).
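One cheap check that doesn't require knowing Bucardo internals: watch pg_stat_activity while the system is slow and see what the replication sessions are doing (8.4 column names; procpid and current_query were renamed in later releases):

```sql
-- Long-running queries and long-open transactions; sessions sitting
-- '<IDLE> in transaction' for a long time also prevent vacuum from
-- reclaiming dead rows.
SELECT procpid, usename, current_query,
       now() - query_start AS query_age,
       now() - xact_start  AS xact_age
FROM pg_stat_activity
ORDER BY xact_start;
```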
>
>
> Thank you
--
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.