Re: Row Limit? - Mailing list pgsql-general
From | Tom Lane |
---|---|
Subject | Re: Row Limit? |
Date | |
Msg-id | 16482.1006824306@sss.pgh.pa.us |
In response to | Row Limit? ("Roderick A. Anderson" <raanders@tincan.org>) |
Responses | Re: Row Limit? |
List | pgsql-general |
"Roderick A. Anderson" <raanders@tincan.org> writes: > I'm in the process of convincing a developer to not use MySQL for a web > based application. While trolling around several sites I noticed the > 1600 attributes (columns) limit of PostgreSQL. Now I can't even beging > to think of an application that would need that many I can see the MySQL > inclined comparing it to their 2945. Since PostgreSQL has joins I was > wondering if this limit applies to physical tables only or are views > limited to 1600 attributes. The relevant source code comments are in htup.h: /* * MaxHeapAttributeNumber limits the number of (user) columns in a table. * The key limit on this value is that the size of the fixed overhead for * a tuple, plus the size of the null-values bitmap (at 1 bit per column), * plus MAXALIGN alignment, must fit into t_hoff which is uint8. On most * machines the absolute upper limit without making t_hoff wider would be * about 1700. Note, however, that depending on column data types you will * likely also be running into the disk-block-based limit on overall tuple * size if you have more than a thousand or so columns. TOAST won't help. */ #define MaxHeapAttributeNumber 1600 /* 8 * 200 */ So the hard limit here really is on how many columns can be put into an instantiated tuple. A view might possibly be defined to have more columns, so long as it was used in a way that didn't ever require forming a tuple of more than MaxHeapAttributeNumber columns (eg, you didn't ever select all the columns at once). But you'd have to persuade heap_create_with_catalog not to reject the view definition. The fact that attnums are int2 would limit you to 32K columns in the view in any case, though. In practice all of these limits could be raised easily if one was sufficiently intent on doing it, and didn't mind blowing disk space for it. (For example, widening t_hoff would probably cost you either four or eight bytes for *every tuple* in the database, depending on MAXALIGN for your hardware.) The existing limits don't seem to be a problem in practical use, so I'm not very excited about doing that. It might also be worth noting that both Postgres and MySQL would probably start to show severe performance problems long before reaching their nominal limits. I'm sure there are plenty of inner loops in both products that would start to chew up unpleasant amounts of time with a few thousand columns to loop over ... so making the world truly safe for ten-thousand-column tables would likely require some work on finding and fixing inefficiencies, too. Of course, the real bottom line is that if you think you need more than order-of-a-hundred columns, your database design probably needs revision anyway ;-) regards, tom lane