Re: Postgres backend using huge amounts of ram - Mailing list pgsql-performance

From: Gary Doades
Subject: Re: Postgres backend using huge amounts of ram
Msg-id: 41A7873A.7000202@gpdnet.co.uk
In response to: Re: Postgres backend using huge amounts of ram (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-performance
Tom Lane wrote:
>
> It's also worth noting that work_mem is temporarily set to
> maintenance_work_mem, which you didn't tell us the value of:
>
It's left at the default (16384).

This would be OK if that were all the memory it used for this type of operation.
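
For reference, this is roughly how I'd confirm the values from psql
(just a sketch; the settings are in kB, so 16384 means 16MB):

  SHOW maintenance_work_mem;
  SHOW work_mem;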

>
>
> My recollection is that hash join chooses hash table partitions partly
> on the basis of the estimated number of input rows.  Since the estimate
> was way off, the actual table size got out of hand a bit :-(

A bit!!

The really worrying bit is that a normal(ish) query also exhibited the
same behaviour. I'm concerned that if the stats get a bit out of date
and the estimate is off, as in this case, a few backends each trying to
grab this much RAM could grind the server to a halt.
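
I suppose the workaround for now is to keep the stats fresh and to
sanity-check the planner's row estimates against the actual rows,
something along these lines (table and column names made up for
illustration):

  ANALYZE staging;
  ANALYZE orders;
  EXPLAIN ANALYZE
    SELECT count(*)
    FROM staging s
    JOIN orders o ON o.id = s.order_id;

If the estimated rows on the Hash node are wildly different from the
actual rows, the hash table sizing will presumably be off by a similar
factor.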

Is this a fixable bug? It seems like a fairly high-priority,
makes-the-server-go-away type of bug to me.

If you need the test data, I could zip the two tables up and send them
somewhere....

Thanks,
Gary.
