Re: DBT-3 with SF=20 got failed - Mailing list pgsql-hackers

From: Tomas Vondra
Subject: Re: DBT-3 with SF=20 got failed
Msg-id: 56042FAE.5000603@2ndquadrant.com
In response to: Re: DBT-3 with SF=20 got failed (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers

On 09/24/2015 07:04 PM, Tom Lane wrote:
> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
>> But what about computing the expected number of batches, but always
>> starting execution assuming no batching? And only if we actually fill
>> work_mem do we start batching, using the expected number of batches?
>
> Hmm. You would likely be doing the initial data load with a "too
> small" numbuckets for single-batch behavior, but if you successfully
> loaded all the data then you could resize the table at little
> penalty. So yeah, that sounds like a promising approach for cases
> where the initial rowcount estimate is far above reality.
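
FWIW here is roughly how I imagine that working - a minimal stand-alone
simulation in C, with all names and numbers made up (this is not the
actual nodeHash.c logic):

#include <stdio.h>

/* round n up to the next power of two */
static long
next_pow2(long n)
{
    long p = 1;

    while (p < n)
        p <<= 1;
    return p;
}

int
main(void)
{
    long work_mem = 64L * 1024 * 1024;  /* 64MB memory budget */
    long tuple_size = 100;              /* bytes per tuple */
    long actual_tuples = 400000;        /* far below the estimate */
    long expected_nbatch = 32;          /* precomputed from the estimate */

    long nbatch = 1;                    /* optimistic: assume no batching */
    long nbuckets = 1024;               /* small initial bucket array */
    long space_used = 0;

    for (long i = 0; i < actual_tuples; i++)
    {
        space_used += tuple_size;       /* "insert" one tuple */

        /* only when work_mem actually overflows do we adopt the
         * precomputed number of batches */
        if (nbatch == 1 && space_used > work_mem)
            nbatch = expected_nbatch;
    }

    /* everything fit: one cheap resize of the bucket array at the end */
    if (nbatch == 1)
        nbuckets = next_pow2(actual_tuples);

    printf("nbatch = %ld, nbuckets = %ld\n", nbatch, nbuckets);
    return 0;
}

With these numbers everything fits, so it ends with nbatch = 1 and
nbuckets = 524288 after a single resize.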

I don't understand the comment about "too small" numbuckets - isn't
choosing a smaller numbuckets exactly the point of the proposed limit?
Batching is merely a consequence of how bad the over-estimate is.
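
To spell out what I mean by that limit, in code (illustrative C - the
cap of half of work_mem is my invention here, the exact formula is up
for debate):

#include <stdio.h>

int
main(void)
{
    long work_mem = 512L * 1024 * 1024; /* 512MB budget */
    long est_rows = 1000000000L;        /* wildly over-estimated */
    long ptr_size = 8;                  /* one pointer per bucket */

    /* nbuckets as derived from the estimate: next power of two */
    long nbuckets = 1;
    while (nbuckets < est_rows)
        nbuckets <<= 1;                 /* ends up at 2^30 */

    /* the limit: don't let the bucket array alone eat more than
     * half of work_mem, no matter what the estimate says */
    long max_buckets = work_mem / ptr_size / 2;

    if (nbuckets > max_buckets)
        nbuckets = max_buckets;

    printf("nbuckets = %ld (bucket array: %ld MB)\n",
           nbuckets, nbuckets * ptr_size / (1024 * 1024));
    return 0;
}

So the bucket array is capped at 256MB here, and if the inner relation
then doesn't fit, that's what batching is for.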

> But I kinda thought we did this already, actually.

I don't think so - I believe we haven't modified this aspect at all. It
may not have been as pressing in the past, thanks to NTUP_PER_BUCKET=10.
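
To put rough numbers on that (mine, assuming 8-byte bucket pointers and
a row estimate of 1 billion):

NTUP_PER_BUCKET = 10  ->  nbuckets = 2^27  ->  2^27 * 8B = 1GB array
NTUP_PER_BUCKET = 1   ->  nbuckets = 2^30  ->  2^30 * 8B = 8GB array

so the same over-estimate now costs about 8x more memory up front than
it used to.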

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


