From: Tatsuo Ishii
Subject: Re: optimizer cost calculation problem
Date: 2003-04-01 09:13:45
Msg-id: 20030401.091345.85415532.t-ishii@sra.co.jp
In response to: Re: optimizer cost calculation problem (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: optimizer cost calculation problem (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
> Tatsuo Ishii <t-ishii@sra.co.jp> writes:
> > Kenji Sugita has identified a problem with cost_sort() in costsize.c.
> > In the following code fragment, sortmembytes is defined as long. So
> >         double        nruns = nbytes / (sortmembytes * 2);
> > may cause an integer overflow if sortmembytes exceeds 2^30, which in
> > turn makes the optimizer produce a wrong query plan (this actually
> > happened in a large PostgreSQL installation which has tons of memory).
> 
> I find it really really hard to believe that it's wise to run with
> sort_mem exceeding 2 gig ;-).  Does that installation have so much
> RAM that it can afford to run multiple many-Gb sorts concurrently?

The process is assigned 1 gig of sort memory to speed up a batch job,
using a backend-process-only sort_mem setting, so that they do not have
to modify postgresql.conf for ordinary users.

BTW, it is not 2 gig but 1 gig (remember that we do sortmembytes * 2).
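
To make the overflow concrete, here is a minimal standalone C sketch.
The 8 GB input size is a made-up figure, and the wraparound assumes a
32-bit long (typical of the era); strictly speaking, signed overflow is
undefined behavior in C:

    #include <stdio.h>

    int main(void)
    {
        /* sort_mem = 1048576 KB (1 gig), as in the installation above */
        long   sortmembytes = 1048576L * 1024; /* 2^30, still fits in 32 bits */
        double nbytes = 8e9;                   /* hypothetical 8 GB of sort input */

        /* With a 32-bit long, sortmembytes * 2 = 2^31 wraps to -2^31,
         * so nruns goes negative and the sort cost estimate is garbage. */
        double nruns_broken = nbytes / (sortmembytes * 2);

        /* Promoting to double before multiplying avoids the overflow. */
        double nruns_fixed = nbytes / (sortmembytes * 2.0);

        printf("broken nruns = %g\nfixed nruns  = %g\n",
               nruns_broken, nruns_fixed);
        return 0;
    }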

> This is far from being the only place that multiplies SortMem by 1024.
> My inclination is that a safer fix is to alter guc.c's entry for
> SortMem to establish a maximum value of INT_MAX/1024 for the variable.
> 
> Probably some of the other GUC variables like shared_buffers ought to
> have overflow-related maxima established, too.
> 
>             regards, tom lane
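
For reference, a sketch of that idea. The struct below is invented for
illustration and is much simpler than guc.c's real config_int entry;
the point is only the INT_MAX / 1024 ceiling, which guarantees that any
later "SortMem * 1024" still fits in a 32-bit int:

    #include <limits.h>

    /* Hypothetical, simplified stand-in for guc.c's integer-GUC table. */
    typedef struct
    {
        const char *name;
        int        *variable;
        int         boot_val;   /* default, in KB */
        int         min;
        int         max;
    } IntGucSketch;

    static int SortMem = 1024;  /* kilobytes */

    static const IntGucSketch sort_mem_entry = {
        "sort_mem", &SortMem,
        1024,               /* default: 1 MB */
        64,                 /* hypothetical minimum */
        INT_MAX / 1024      /* cap so SortMem * 1024 cannot overflow */
    };

The same ceiling would apply to shared_buffers and any other GUC that
is later scaled from kilobytes or blocks into bytes.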


