Re: Performance tuning on RedHat Enterprise Linux 3 - Mailing list pgsql-general

From Lincoln Yeoh
Subject Re: Performance tuning on RedHat Enterprise Linux 3
Date
Msg-id 5.2.1.1.1.20041207201144.02e14fa0@localhost
In response to Re: Performance tuning on RedHat Enterprise Linux 3  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general

But isn't the problem the planner screwing up, rather than the sort_mem setting?

There was my case where the 7.4 planner estimated 1500 distinct rows when
there were actually 1391110. On 7.3.4 the query used about 4.4MB; on 7.4
it definitely used more than 400MB for the same query (I had to kill
postgresql rather than wait for it to use more). That's a lot more than
200%. Maybe 3x sort_mem is too low a threshold, but at least by default
keep it below server RAM divided by the number of backends, or something
like that.
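
To illustrate the shape of the check (table and column names here are
hypothetical, not my actual schema):

    -- what the planner thinks: estimated rows on the HashAggregate node
    EXPLAIN SELECT col, count(*) FROM big_table GROUP BY col;

    -- what is actually there
    SELECT count(DISTINCT col) FROM big_table;

    -- the statistic the estimate comes from; raising the column's
    -- statistics target and re-running ANALYZE may improve it
    SELECT n_distinct FROM pg_stats
    WHERE tablename = 'big_table' AND attname = 'col';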

Even if the planner has improved a lot, if cases like that still occur
from time to time it'll be a lot better for stability/availability if
there's a limit.
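
As a rough sketch of the kind of ceiling I mean (numbers made up;
sort_mem is in kB on 7.x, and a single backend can use several multiples
of it):

    # postgresql.conf on a hypothetical 2 GB machine
    max_connections = 100
    # rough ceiling: 2 GB / 100 backends = ~20 MB per backend,
    # so keep the nominal setting well under that
    sort_mem = 8192          # kB, i.e. 8 MB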

Doubt if I still have the same data to test on 8.0.

Link.

At 12:35 AM 12/7/2004 -0500, Tom Lane wrote:
>Neil Conway <neilc@samurai.com> writes:
> > As a quick hack, what about throwing away the constructed hash table and
> > switching from hashing to sorting if we exceed sort_mem by a significant
> > factor? (say, 200%) We might also want to print a warning message to the
> > logs.
>
>If I thought that a 200% error in memory usage were cause for a Chinese
>fire drill, then I'd say "yeah, let's do that".  The problem is that the
>place where performance actually goes into the toilet is normally an
>order of magnitude or two above the nominal sort_mem setting (for
>obvious reasons: admins can't afford to push the envelope on sort_mem
>because of the various unpredictable multiples that may apply).  So
>switching to a hugely more expensive implementation as soon as we exceed
>some arbitrary limit is likely to be a net loss not a win.
>
>If you can think of a spill methodology that has a gentle degradation
>curve, then I'm all for that.  But I doubt there are any quick-hack
>improvements to be had here.
>
>                         regards, tom lane



