Re: [pgsql-admin] "Soft-hitting" the 1600 column limit - Mailing list pgsql-admin

From David G. Johnston
Subject Re: [pgsql-admin] "Soft-hitting" the 1600 column limit
Msg-id CAKFQuwbuurPFxQ=ts2znAC-STpF9AW7C91qKXsGThWQr8u4Faw@mail.gmail.com
In response to [pgsql-admin] "Soft-hitting" the 1600 column limit  (nunks <nunks.lol@gmail.com>)
Responses Re: [pgsql-admin] "Soft-hitting" the 1600 column limit  (Scott Ribe <scott_ribe@elevated-dev.com>)
List pgsql-admin
On Wed, Jun 6, 2018 at 9:39 AM, nunks <nunks.lol@gmail.com> wrote:
I reproduced this behavior in PostgreSQL 10.3 with a simple bash loop and a two-column table, one of which is fixed and the other is repeatedly dropped and re-created until the 1600 limit is reached.

To me this is pretty cool, since I can use this limit as leverage to push the developers to the right path, but should Postgres be doing that? It's as if it doesn't decrement some counter when a column is dropped.
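The loop described above can be sketched in plain SQL rather than bash (a minimal sketch; the table name `t` and column name `c` are hypothetical):

```sql
CREATE TABLE t (keep int);

-- Repeatedly add and drop a nullable column. Each cycle still consumes
-- one attribute slot, because dropped columns are retained internally,
-- so the ADD COLUMN eventually fails with:
--   ERROR: tables can have at most 1600 columns
DO $$
BEGIN
  FOR i IN 1..1700 LOOP
    EXECUTE 'ALTER TABLE t ADD COLUMN c int';
    EXECUTE 'ALTER TABLE t DROP COLUMN c';
  END LOOP;
END $$;
```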

This is working as expected.  When dropping a column, or adding a new column that can contain nulls, PostgreSQL does not, and does not want to, rewrite the physically stored records/table.  Thus it must be capable of accepting records formed under prior table versions, which means it must keep track of those now-dropped columns.
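You can see this bookkeeping in the system catalog: a dropped column stays in pg_attribute, flagged rather than removed (a sketch assuming a hypothetical table name `t`):

```sql
SELECT attnum, attname, attisdropped
FROM   pg_attribute
WHERE  attrelid = 't'::regclass
  AND  attnum > 0;
-- Dropped columns appear with attisdropped = true and a placeholder
-- name of the form '........pg.dropped.N........', and they still
-- count toward the 1600-attribute limit.
```

Running VACUUM FULL or an explicit table rewrite does not reclaim these slots; copying the data into a freshly created table does.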

I'm sure there is more to it that requires reading, and understanding, the source code; but that does seem to explain why it works the way it does.

David J.
