Re: postmaster segfaults with HUGE table - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: postmaster segfaults with HUGE table
Date:
Msg-id: 12612.1100570033@sss.pgh.pa.us
In response to: Re: postmaster segfaults with HUGE table  (Neil Conway <neilc@samurai.com>)
Responses: Re: postmaster segfaults with HUGE table  (Neil Conway <neilc@samurai.com>)
List: pgsql-hackers
Neil Conway <neilc@samurai.com> writes:
> Attached is a patch. Not entirely sure that the checks I added are in
> the right places, but at any rate this fixes the three identified
> problems for me.

I think the SELECT limit should be MaxTupleAttributeNumber not
MaxHeapAttributeNumber.  The point of the differential is to allow
you a bit of slop to do extra stuff (like sorting) when selecting
from a max-width table, but the proposed patch takes that away.
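
[For reference, the slop in question is the gap between the two constants as
defined in PostgreSQL's access/htup.h around this era (the header has moved in
later releases); the definitions, quoted for illustration:

    #define MaxTupleAttributeNumber 1664    /* 8 * 208 */
    #define MaxHeapAttributeNumber  1600    /* 8 * 200 */

The extra 64 attribute numbers are what leave room for resjunk columns such as
sort keys on top of a max-width heap tuple.]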

As for the placement issue, I'm OK with what you did in tablecmds.c,
but for the SELECT case I think that testing in transformTargetList
is probably not good.  It definitely doesn't do to test at the top
of the routine, because we haven't expanded '*' yet.  But testing at
the bottom probably won't do either since it doesn't cover later
addition of junk attributes --- in other words you could probably still
crash it by specifying >64K distinct ORDER BY values.

What I think needs to happen is to check p_next_resno at some point
after the complete tlist has been built.  Since that's an int, we
don't need to worry too much about it overflowing, so one test at the
end should do (though I suppose if you're really paranoid you could
instead add a test everywhere it's used/incremented).
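
[To make the suggestion concrete, a minimal sketch of such a check, assuming it
runs at a point where the ParseState is still in scope after the targetlist
(junk attributes included) is complete; the function name, placement, and error
wording are illustrative, not the committed fix:

    #include "postgres.h"
    #include "access/htup.h"          /* MaxTupleAttributeNumber */
    #include "parser/parse_node.h"    /* ParseState, p_next_resno */

    /*
     * Illustrative sketch only.  p_next_resno holds the next resno to be
     * assigned, so the number of tlist entries built so far (resjunk ones
     * included) is p_next_resno - 1.  It is an int, so one test at the
     * end is enough; overflow is not a practical concern.
     */
    static void
    check_targetlist_length(ParseState *pstate)
    {
        if (pstate->p_next_resno - 1 > MaxTupleAttributeNumber)
            ereport(ERROR,
                    (errcode(ERRCODE_TOO_MANY_COLUMNS),
                     errmsg("target lists can have at most %d entries",
                            MaxTupleAttributeNumber)));
    }
]
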
        regards, tom lane

