Re: [SQL] Table size limit? - Mailing list pgsql-sql

From Tom Lane
Subject Re: [SQL] Table size limit?
Date
Msg-id 18085.934467178@sss.pgh.pa.us
In response to Table size limit?  ("Mike Field" <mike@fieldco.com>)
List pgsql-sql
"Mike Field" <mike@fieldco.com> writes:
> I'm inserting documents of around 80K each into a table - I split up the
> content into chunks of 8K each (I use a web interface with PHP), and
> insert the chunks into the following fields:
> docid, content1, content2, content3, content4, ... , content10

That will not work --- the 8K limitation is on the total size of a tuple
(record), not on the size of individual fields in it.  You could make a
table with fields "docid", "partno", "content" and then insert the data
of a document into multiple records with the same "docid" and varying
"partno".  It's a pain though.  I hope to see the 8K/tuple limitation go
away in the next release or two.
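A minimal sketch of the suggested docid/partno/content workaround, with a document split into 8K pieces stored one row per piece and reassembled by ordering on partno. The table layout follows the message; the helper names are my own, and SQLite is used here only as a convenient stand-in for PostgreSQL:

```python
import sqlite3

CHUNK_SIZE = 8 * 1024  # stay under the 8K-per-tuple limit

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE documents (docid INTEGER, partno INTEGER, content TEXT, "
    "PRIMARY KEY (docid, partno))"
)

def store_document(docid, text):
    """Split text into CHUNK_SIZE pieces; insert one row per piece."""
    for partno, start in enumerate(range(0, len(text), CHUNK_SIZE)):
        conn.execute(
            "INSERT INTO documents (docid, partno, content) VALUES (?, ?, ?)",
            (docid, partno, text[start:start + CHUNK_SIZE]),
        )

def fetch_document(docid):
    """Reassemble a document by concatenating its parts in order."""
    rows = conn.execute(
        "SELECT content FROM documents WHERE docid = ? ORDER BY partno",
        (docid,),
    )
    return "".join(r[0] for r in rows)

doc = "x" * 80_000          # a document of around 80K, as in the question
store_document(1, doc)
assert fetch_document(1) == doc
```

An 80K document lands in ten rows here (nine full 8K chunks plus a remainder), and each row stays within the tuple limit on its own.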

> Question:  will I ever run out of space in my table?  I'll have about
> 450 documents to start, then it'll grow indefinitely.

The total size of the table is not a problem --- you'll run out of disk
space first.

> My other option, from what I hear, is to just have a field for each
> document with the URL of the file (either .txt or .html or whatever).

Yes, you could store the text in external files and just refer to those
files in the DB entries.  Not a really satisfactory solution, but it might
do as a workaround.  A variant on this is to keep the text in "large
objects" within the SQL DB, but those are hard to use; worse, they're
not very efficient if you have a lot of them.
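The external-file variant could look something like the following sketch, where the table holds only a docid and the path of the file containing the text. Again the table and helper names are hypothetical, and SQLite stands in for PostgreSQL:

```python
import os
import sqlite3
import tempfile

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (docid INTEGER PRIMARY KEY, path TEXT)")

def store_document(docid, text, directory):
    """Write the text to a file; record only its path in the database."""
    path = os.path.join(directory, f"doc{docid}.txt")
    with open(path, "w") as f:
        f.write(text)
    conn.execute(
        "INSERT INTO documents (docid, path) VALUES (?, ?)", (docid, path)
    )

def fetch_document(docid):
    """Look up the path for a docid and read the file back."""
    (path,) = conn.execute(
        "SELECT path FROM documents WHERE docid = ?", (docid,)
    ).fetchone()
    with open(path) as f:
        return f.read()

with tempfile.TemporaryDirectory() as d:
    store_document(1, "x" * 80_000, d)
    assert fetch_document(1) == "x" * 80_000
```

The row itself stays tiny regardless of document size, at the cost of keeping the filesystem and the table consistent yourself (the database cannot guarantee the file still exists).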
        regards, tom lane

