Thread: update db doesn't work
Hello,

I have created a table with 1600 columns. Now I want to update some
columns with new values:

    update db set col1=1 where col2=2;

This generates the error message:

    heap_formtuple: numberOfAttributes of 1601 > 1600

Any suggestions?

Thanks,
Dirk
On Mon, 27 May 2002, dirk heitmann wrote:
> I have created a table with 1600 columns. Now I want
> to update some columns with new values:
> update db set col1=1 where col2=2;
>
> This generates the error message:
> heap_formtuple: numberOfAttributes of 1601 > 1600

--8<--
Maximum number of columns in a table: 1600
--8<--

http://www2.se.postgresql.org/users-lounge/limitations.html

BR, Jani

--
Jani Averbach
On 5/27/02 8:49 AM, "Jani Averbach" <jaa@cc.jyu.fi> wrote:
> On Mon, 27 May 2002, dirk heitmann wrote:
>
>> I have created a table with 1600 columns. Now I want
>> to update some columns with new values:
>> update db set col1=1 where col2=2;
>>
>> This generates the error message:
>> heap_formtuple: numberOfAttributes of 1601 > 1600
>
> --8<--
> Maximum number of columns in a table: 1600
> --8<--
>
> http://www2.se.postgresql.org/users-lounge/limitations.html

Does this limit only apply to a "defined table" (such as a table or view),
or does it also apply to the result of any select, for example a two-table
join which would have 1601 output columns?

-- sgl
Jani Averbach sez:
} On Mon, 27 May 2002, dirk heitmann wrote:
} > I have created a table with 1600 columns. Now I want
} > to update some columns with new values:
} > update db set col1=1 where col2=2;
} >
} > This generates the error message:
} > heap_formtuple: numberOfAttributes of 1601 > 1600
}
} --8<--
} Maximum number of columns in a table: 1600
} --8<--
} http://www2.se.postgresql.org/users-lounge/limitations.html

Which, while accurate, fails to answer the question. A column is not being
added. A table is not being created. Rows are being updated, and the system
is complaining that there are too many columns. Why?

} BR, Jani

--Greg
On Mon, 2002-05-27 at 16:30, Gregory Seidman wrote:
> Jani Averbach sez:
> } On Mon, 27 May 2002, dirk heitmann wrote:
> } > I have created a table with 1600 columns. Now I want
> } > to update some columns with new values:
> } > update db set col1=1 where col2=2;
> } >
> } > This generates the error message:
> } > heap_formtuple: numberOfAttributes of 1601 > 1600
> }
> } --8<--
> } Maximum number of columns in a table: 1600
> } --8<--
> } http://www2.se.postgresql.org/users-lounge/limitations.html
>
> Which, while accurate, fails to answer the question. A column is not being
> added. A table is not being created. Rows are being updated, and the system
> is complaining that there are too many columns. Why?

Don't know what the answer is, but it would be interesting to confirm
whether PG thinks there are only 1600 columns. Try:

    select relnatts from pg_class where relname='db';

I assume that says 1600? (I was wondering about catalogue changes such as
ADD COLUMN, but that does seem to check MaxHeapAttributeNumber.)

Regards

John

--
John Gray
Azuli IT
www.azuli.co.uk
Steve Lane <slane@fmpro.com> writes:
> Does this limit only apply to a "defined table" (such as a table or view),
> or does it also apply to the result of any select, for example a two-table
> join which would have 1601 output columns?

It would apply to anything that forms a tuple, so yes, a join output is
restricted. The source-code comments may be illuminating:

/*
 * MaxHeapAttributeNumber limits the number of (user) columns in a table.
 * The key limit on this value is that the size of the fixed overhead for
 * a tuple, plus the size of the null-values bitmap (at 1 bit per column),
 * plus MAXALIGN alignment, must fit into t_hoff which is uint8. On most
 * machines the absolute upper limit without making t_hoff wider would be
 * about 1700. Note, however, that depending on column data types you will
 * likely also be running into the disk-block-based limit on overall tuple
 * size if you have more than a thousand or so columns. TOAST won't help.
 */
#define MaxHeapAttributeNumber		1600	/* 8 * 200 */

I am not sure that we are careful to check natts <= MaxHeapAttributeNumber
everywhere that we really should. It could be that you would see an error
(or buggy behavior :-() in the join case only if there were actually some
nulls in a created tuple. But IMHO the system ought to reject the attempt
to form the join to begin with...

			regards, tom lane
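For anyone wondering where "about 1700" comes from: t_hoff is a uint8, so
the offset to a tuple's data can be at most 255 bytes, and whatever is left
after the fixed header (rounded down to a MAXALIGN boundary) holds the
one-bit-per-column null bitmap. A rough back-of-the-envelope sketch follows;
the 32-byte header and MAXALIGN of 8 are assumed round figures for
illustration, not the exact values of any particular release or platform.

#include <stdio.h>

/*
 * Back-of-the-envelope for the "about 1700" figure quoted above.
 * ASSUMED_HEADER and ASSUMED_MAXALIGN are illustrative assumptions;
 * the real numbers depend on PostgreSQL version and platform.
 */
#define T_HOFF_MAX        255	/* t_hoff is a uint8 */
#define ASSUMED_HEADER     32	/* assumed fixed tuple header, in bytes */
#define ASSUMED_MAXALIGN    8	/* assumed MAXALIGN */

int
main(void)
{
	/* t_hoff itself is MAXALIGN'd, so round the budget down */
	int	max_hoff = (T_HOFF_MAX / ASSUMED_MAXALIGN) * ASSUMED_MAXALIGN;
	int	bitmap_bytes = max_hoff - ASSUMED_HEADER;	/* room for the null bitmap */
	int	max_columns = bitmap_bytes * 8;			/* one null bit per column */

	printf("rough upper bound on columns: %d\n", max_columns);	/* prints 1728 */
	return 0;
}

With those assumed figures the bound works out to 1728, comfortably above
the round limit of 1600.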
Gregory Seidman <gss+pg@cs.brown.edu> writes:
> Which, while accurate, fails to answer the question. A column is not being
> added. A table is not being created. Rows are being updated, and the system
> is complaining that there are too many columns. Why?

Good question. How did you construct a table with 1601 columns? The
system should not have let you do that.

			regards, tom lane
John Gray <jgray@azuli.co.uk> writes:
> On Mon, 2002-05-27 at 16:30, Gregory Seidman wrote:
>> Which, while accurate, fails to answer the question. A column is not being
>> added. A table is not being created. Rows are being updated, and the system
>> is complaining that there are too many columns. Why?

> Don't know what the answer is, but it would be interesting to confirm
> whether PG thinks there are only 1600 columns.

Oh, wait, of course. The table has 1600 columns. But an UPDATE operation
needs to form an intermediate tuple containing all 1600 data columns plus
the CTID column. We should allow that, IMHO, which is another reason why
heap_formtuple is not the place to be applying the restriction.

Looks like there is some work to do here.

			regards, tom lane
Could it be that when the table is being created, PG is not adding the
number of SYSTEM columns to the number of user columns before saying,
"Hey, can't make the table, you have too many columns!"?

Just a thought.

Terry Fielder
Network Engineer
Great Gulf Homes / Ashton Woods Homes
terry@greatgulfhomes.com

> -----Original Message-----
> From: pgsql-general-owner@postgresql.org
> [mailto:pgsql-general-owner@postgresql.org] On Behalf Of Tom Lane
> Sent: Monday, May 27, 2002 12:29 PM
> To: gss+pg@cs.brown.edu
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] update db doesn't work
>
> Gregory Seidman <gss+pg@cs.brown.edu> writes:
> > Which, while accurate, fails to answer the question. A column is not
> > being added. A table is not being created. Rows are being updated, and
> > the system is complaining that there are too many columns. Why?
>
> Good question. How did you construct a table with 1601 columns? The
> system should not have let you do that.
>
> 			regards, tom lane
dirk heitmann <dirk.heitmann@fal.de> writes:
> I have created a table with 1600 columns. Now I want
> to update some columns with new values:
> update db set col1=1 where col2=2;
> This generates the error message:
> heap_formtuple: numberOfAttributes of 1601 > 1600

I've modified current development sources to allow this case to work.
If you are interested in back-porting the patch into existing releases,
modify src/include/access/htup.h as indicated below and change
heap_formtuple in src/backend/access/common/heaptuple.c to test against
MaxTupleAttributeNumber instead of MaxHeapAttributeNumber. (AFAICT all
the other uses of MaxHeapAttributeNumber should stay the same.)

			regards, tom lane

/*
 * MaxTupleAttributeNumber limits the number of (user) columns in a tuple.
 * The key limit on this value is that the size of the fixed overhead for
 * a tuple, plus the size of the null-values bitmap (at 1 bit per column),
 * plus MAXALIGN alignment, must fit into t_hoff which is uint8. On most
 * machines the upper limit without making t_hoff wider would be a little
 * over 1700. We use round numbers here and for MaxHeapAttributeNumber
 * so that alterations in HeapTupleHeaderData layout won't change the
 * supported max number of columns.
 */
#define MaxTupleAttributeNumber		1664	/* 8 * 208 */

/*----------
 * MaxHeapAttributeNumber limits the number of (user) columns in a table.
 * This should be somewhat less than MaxTupleAttributeNumber. It must be
 * at least one less, else we will fail to do UPDATEs on a maximal-width
 * table (because UPDATE has to form working tuples that include CTID).
 * In practice we want some additional daylight so that we can gracefully
 * support operations that add hidden "resjunk" columns, for example
 * SELECT * FROM wide_table ORDER BY foo, bar, baz.
 * In any case, depending on column data types you will likely be running
 * into the disk-block-based limit on overall tuple size if you have more
 * than a thousand or so columns. TOAST won't help.
 *----------
 */
#define MaxHeapAttributeNumber		1600	/* 8 * 200 */
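For those back-porting by hand, the heap_formtuple change described above
amounts to swapping the constant used in its sanity check. A sketch of
roughly what the updated test would look like follows; the elog wording is
taken from the error message reported in this thread, but the exact
surrounding code varies between releases, so treat it as illustrative
rather than a verbatim diff.

	/*
	 * In heap_formtuple() in src/backend/access/common/heaptuple.c:
	 * test against MaxTupleAttributeNumber (from the htup.h excerpt
	 * above) instead of MaxHeapAttributeNumber, so that the extra
	 * CTID/resjunk columns formed during an UPDATE no longer trip
	 * the check.
	 */
	if (numberOfAttributes > MaxTupleAttributeNumber)
		elog(ERROR, "heap_formtuple: numberOfAttributes of %d > %d",
			 numberOfAttributes, MaxTupleAttributeNumber);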