Thread: tuples too big
Hi folks,

I've got a problem - I need a little help. I'm using 6.5.3 from Debian stable.

I've got a database, which has some fields in one table that need to hold a
fair bit of text (say, 8000-10,000+ characters). Right now, I've got those
fields as simple text.

When I try to enter data into those fields, I get the "Tuple too large" error.

I can't chop up the fields into smaller pieces, and 8104 is not enough.

Suggestions?

Michelle
--
------------
Michelle Murrain, Ph.D.
President
Norwottuck Technology Resources
mpm@norwottuck.com
http://www.norwottuck.com
Believe it or not, we have the same problem with DB2 on AS/400.

In PG you can create large objects, which is the proper answer, but there is
another way: a memo table. This just contains a key to link up to the master
record, a sequence number, and a long text field, e.g. 6K. Then write a
function that automatically splits large amounts of text across rows in the
memo table. Another function retrieves the data.

We expanded this to have a second key that can store more than one 'notebook'
per record.

This works well for things like notes and memos but probably not so good for
huge amounts of data. It's flexible and there's no hardcoded limit to the
length of data. The front end application does the splitting/joining; we use
VB but any language should be able to do it.

OK, tell me this isn't technically elegant or correct, but it's simple and it
works! E.g.:

CUSTOMER
ACNO  NAME
1     Smith

CUSTOMER_M
ACNO  SEQ  TEXT
1     1    lots of text
1     2    more text

MC.

Michelle Murrain <mpm@norwottuck.com> on 08/02/2001 14:46:12 wrote:
> I've got a database, which has some fields in one table that need to hold a
> fair bit of text (say, 8000-10,000+ characters). [...]
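Martin's VB split/join functions aren't shown, but the technique is simple enough to sketch. Here is a hypothetical Python version against SQLite, reusing the CUSTOMER/CUSTOMER_M names and the ~6K chunk size from his example (the chunk size and table layout are assumptions, not his actual schema):

```python
import sqlite3

CHUNK = 6000  # max characters per memo row, mirroring the ~6K field above

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer   (acno INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE customer_m (acno INTEGER, seq INTEGER, text TEXT,
                             PRIMARY KEY (acno, seq));
""")

def save_memo(acno, text):
    """Split one long string across numbered rows in the memo table."""
    conn.execute("DELETE FROM customer_m WHERE acno = ?", (acno,))
    for seq, start in enumerate(range(0, len(text), CHUNK), start=1):
        conn.execute("INSERT INTO customer_m VALUES (?, ?, ?)",
                     (acno, seq, text[start:start + CHUNK]))

def load_memo(acno):
    """Reassemble the chunks in sequence order."""
    rows = conn.execute(
        "SELECT text FROM customer_m WHERE acno = ? ORDER BY seq", (acno,))
    return "".join(r[0] for r in rows)

conn.execute("INSERT INTO customer VALUES (1, 'Smith')")
save_memo(1, "lots of text " * 1000)   # ~13,000 chars, spans 3 memo rows
assert load_memo(1) == "lots of text " * 1000
```

As Martin says, the splitting lives entirely in the front end; the database only ever sees rows under the 6K limit.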
martin.chantler@convergys.com wrote:
> This works well for things like notes and memos but probably not so good
> for huge amounts of data. It's flexible and there's no hardcoded limit to
> the length of data.

Sounds something like TOAST, part of PostgreSQL 7.1, to be released soon.
TOAST, however, is in the backend itself and is fully automatic. Hardcoded
limits on the amount of data per row are gone in 7.1 (there is still a limit
on the number of _columns_ in a row, but not on the size of each column).
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11
> I've got a problem - I need a little help. I'm using 6.5.3 from Debian
> stable. I've got a database, which has some fields in one table that need
> to hold a fair bit of text (say, 8000-10,000+ characters). Right now, I've
> got those fields as simple text.

[ Martin suggests breaking up the large values by hand ]

Another answer is to update to 7.0.3, and redeclare your large fields as
'lztext'. This is a hack that is going away in 7.1 (it's superseded by the
more general TOAST feature), but it should buy you the extra couple of K you
need for now. A rough rule of thumb is that LZ compression will save a
factor of 2 or so on chunks of text.

A third possibility is to increase BLCKSZ from 8K to 32K, but that requires
a rebuild from source, so you might not want to mess with that if you're
accustomed to using RPMs.

These two answers are obviously hacks, so I'd agree with Martin's approach
if there were no better way on the horizon. But with 7.1 nearly out the
door, I think it's silly to expend a lot of programming effort to split up
and reassemble records; the need for that will go away as soon as you
migrate to 7.1. What you want right now is a quick and easy stopgap.

regards, tom lane
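Tom's factor-of-2 rule of thumb can be sanity-checked against your own data. lztext uses PostgreSQL's internal LZ scheme rather than zlib, so this is only a rough stand-in, and the repeated sample text below (a made-up phrase) compresses far better than genuine prose would, so read the printed ratio as an upper bound:

```python
import zlib

# Hypothetical sample; substitute the real contents of your text fields.
# Repetition inflates the ratio well past the factor-of-2 rule of thumb.
text = ("Long case notes often run to ten thousand characters in a "
        "single text column of the master table. ") * 100

raw = text.encode("ascii")
packed = zlib.compress(raw)
print(f"{len(raw)} bytes raw, {len(packed)} bytes compressed, "
      f"{len(raw) / len(packed):.1f}x")
```

If your 10K fields compress to under the 8104-byte limit, lztext will get you through until 7.1.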
Well, I'd upgrade to at least 7.0.3 and change BLCKSZ to 32k (in
src/include/config.h.in before running configure, or in config.h
afterwards). That will give you a 32k limit on tuple size.

7.1 eliminates the need for such things, as TOAST does away with all tuple
size limits. If you can wait, it should be out fairly soon. I've been using
a 7.1 beta 4 without a problem, though I'm sure some bugs will pop up before
the stable release.

Good luck!

-Mitch

On Thu, 8 Feb 2001, Michelle Murrain wrote:
> When I try to enter data into those fields, I get the "Tuple too large"
> error. I can't chop up the fields into smaller pieces, and 8104 is not
> enough.
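The BLCKSZ change Mitch describes is a one-line edit to a #define before building. A sketch of that substitution, run here against a stand-in copy of the header rather than the real source tree (the surrounding comment text is invented; only the #define line matters):

```python
import re
from pathlib import Path

# Stand-in for src/include/config.h.in; in a real build you would edit
# the file in the source tree and then rebuild (config.h instead, if
# configure has already been run).
hdr = Path("config.h.in")
hdr.write_text("/* Size of a disk block */\n#define BLCKSZ 8192\n")

# Bump the block size from 8K to 32K; the tuple size limit scales with it.
hdr.write_text(re.sub(r"(?m)^#define BLCKSZ\s+\d+",
                      "#define BLCKSZ 32768", hdr.read_text()))
print(hdr.read_text())
```

Note that a database initialized with one BLCKSZ cannot be read by a server built with another, so dump and reload your data across the change.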