Thread: Re: [HACKERS] Hashjoin status report
Hello,
(using snapshot of May 5th)
Because we needed a workaround for this hash problem, I looked into the
hashing code (well, without knowing the background).
To reduce the probability of an overflow I increased
#define FUDGE_FAC 3
which was originally 1.5. I think it's a data-dependent constant
(correct?), and for my data it works...
It does the job, but this is certainly not the solution.
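
For context, a minimal sketch of where a constant like FUDGE_FAC enters
the sizing arithmetic. This is illustrative C under assumed names
(ntuples, tupwidth, hash_table_bytes), not the actual nodeHash.c code:

    #include <math.h>

    #define FUDGE_FAC 3.0   /* was 1.5: safety padding on size estimates */

    /*
     * Sketch: pad the estimated inner-relation size by FUDGE_FAC before
     * comparing it with the memory budget implied by -B.  A larger
     * FUDGE_FAC leaves more slack for overflow tuples, but the inflated
     * estimate also exceeds the budget (and forces batching) sooner.
     */
    static int
    choose_nbatch(double ntuples, double tupwidth, double hash_table_bytes)
    {
        double inner_rel_bytes = ntuples * tupwidth * FUDGE_FAC;

        if (inner_rel_bytes <= hash_table_bytes)
            return 1;           /* whole inner relation fits in memory */
        return (int) ceil(inner_rel_bytes / hash_table_bytes);
    }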
Increasing -B to 256 doesn't work:
NOTICE: Buffer Leak: [248] (freeNext=0, freePrev=0, relname=, blockNum=0, flags=0x0, refcount=0 25453)
pq_flush: send() failed, errno 88
pq_recvbuf: recv() failed, errno=88
pq_flush: send() failed, errno 88
pq_recvbuf: recv() failed, errno=88
Kind regards,
Michael Contzen
Dohle Systemberatung, Germany
Michael Contzen <mcontzen@dohle.com> writes:
> (using snapshot of May 5th)
> Because we needed a workaround for this hash problem, I looked into
> the hashing code (well, without knowing the background).
> To reduce the probability of an overflow I increased
> #define FUDGE_FAC 3
> which was originally 1.5.

For a given -B setting, that would mean that more of the hashtable space
is reserved for overflow records and less for hashbuckets, which should
reduce the probability of an overrun --- but it would also make the
system more prone to decide that it needs to divide the hash merge into
"batches", so performance will suffer.  Still, it seems like a
reasonable workaround until a proper fix can be made.  In fact I think
maybe I should change FUDGE_FAC to 2.0 for the 6.5 release, as a
stopgap measure...

A more critical problem is that there were some severe bugs in the code
for handling batches.  I fixed at least some of 'em, but I committed
those fixes on the evening of 5 May, so I suspect they are not in your
snapshot.  (Check the date of src/backend/executor/nodeHash.c to see.)

> Increasing -B to 256 doesn't work:
> NOTICE: Buffer Leak: [248] (freeNext=0, freePrev=0, relname=,
> blockNum=0, flags=0x0, refcount=0 25453)
> pq_flush: send() failed, errno 88

This behavior could be an artifact of one of the bugs I fixed (which
was a large-scale memory clobber).  Or it could be another bug
entirely.  This one actually worries me a great deal more than the
"out of memory" problem, because that one I know how and where to fix.
If this is a separate bug then I don't know where it's coming from.
Please upgrade to the latest snapshot and check -B 256 again.

			regards, tom lane
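
To make the tradeoff above concrete: with the total space fixed by -B,
charging each tuple a padded cost means fewer hashbuckets fit, while
the padded total estimate crosses the batching threshold sooner. A
hedged sketch with simplified arithmetic (the real logic lives in
src/backend/executor/nodeHash.c; the formulas and names here are
assumptions, not the 6.5 source):

    #include <math.h>

    #define BLCKSZ    8192     /* assumed bytes per -B buffer */
    #define FUDGE_FAC 3.0      /* was 1.5 */

    /*
     * Sketch: derive bucket and batch counts from the -B budget.
     * Charging each tuple tupwidth * FUDGE_FAC bytes reserves overflow
     * slack per bucket, so fewer buckets fit in the same fixed space,
     * and the padded total makes a multi-batch plan more likely.
     */
    static void
    size_hash_table(int nbuffers, double ntuples, double tupwidth,
                    int *nbuckets, int *nbatch)
    {
        double hash_table_bytes = (double) nbuffers * BLCKSZ;

        *nbuckets = (int) (hash_table_bytes / (tupwidth * FUDGE_FAC));
        *nbatch = (int) ceil(ntuples * tupwidth * FUDGE_FAC
                             / hash_table_bytes);
    }

Under this arithmetic, doubling FUDGE_FAC from 1.5 to 3.0 roughly
halves the bucket count and doubles the batch estimate for the same -B
setting, which matches the performance caution above.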
> Michael Contzen <mcontzen@dohle.com> writes:
> > (using snapshot of May 5th)
> > Because we needed a workaround for this hash problem, I looked into
> > the hashing code (well, without knowing the background).
> > To reduce the probability of an overflow I increased
> > #define FUDGE_FAC 3
> > which was originally 1.5.
>
> For a given -B setting, that would mean that more of the hashtable
> space is reserved for overflow records and less for hashbuckets, which
> should reduce the probability of an overrun --- but it would also make
> the system more prone to decide that it needs to divide the hash merge
> into "batches", so performance will suffer.  Still, it seems like a
> reasonable workaround until a proper fix can be made.  In fact I think
> maybe I should change FUDGE_FAC to 2.0 for the 6.5 release, as a
> stopgap measure...
>
> A more critical problem is that there were some severe bugs in the
> code for handling batches.  I fixed at least some of 'em, but I
> committed those fixes on the evening of 5 May, so I suspect they are
> not in your snapshot.  (Check the date of
> src/backend/executor/nodeHash.c to see.)

One thing to consider: if you decide to wait on the patch until after
6.5, but then we find the new optimizer is causing this bug too often,
we will have to fix it later in the beta cycle with less testing time
available.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  + If your life is a hard drive,      |  830 Blythe Avenue
  + Christ can be your backup.         |  Drexel Hill, Pennsylvania 19026