CLOB & BLOB limitations in PostgreSQL - Mailing list pgsql-general

From Jack.O'Sullivan@tessella.com
Subject CLOB & BLOB limitations in PostgreSQL
Date
Msg-id OF312B1274.D3B65F45-ON80257CB7.004BA4DA-80257CB7.00510BD0@tessella.co.uk
Responses Re: CLOB & BLOB limitations in PostgreSQL  (Andy Colson <andy@squeakycode.net>)
Re: CLOB & BLOB limitations in PostgreSQL  (Albe Laurenz <laurenz.albe@wien.gv.at>)
Re: CLOB & BLOB limitations in PostgreSQL  (Ivan Voras <ivoras@freebsd.org>)
List pgsql-general
I am working for a client who is interested in migrating from Oracle to Postgres. Their database is currently ~20TB in size, and is growing. The biggest table in this database is effectively a BLOB store and currently has around 1 billion rows.

From reading around about Postgres, we have found a couple of limits that are concerning for migrating this database. We are not up against them just yet, but they are likely to become a blocker within the next few years.

1) A table can be a maximum of 32TB (http://www.postgresql.org/about/)

2) When storing bytea or text datatypes, there is a limit of 4 billion entries per table (https://wiki.postgresql.org/wiki/BinaryFilesInDB)
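
To make the question concrete, this is the sort of check I imagine running once the data is in Postgres (the table name is a placeholder, and it assumes the bytea values end up stored out of line in the table's TOAST table):

-- total on-disk size of the table, including TOAST and indexes,
-- to compare against the 32TB per-table limit
SELECT pg_size_pretty(pg_total_relation_size('blob_store'));

-- find the TOAST table behind blob_store
SELECT reltoastrelid::regclass FROM pg_class WHERE relname = 'blob_store';

-- then count out-of-line values (one chunk_id per stored value),
-- substituting the name returned above, to compare against the 4 billion limit
-- SELECT count(DISTINCT chunk_id) FROM pg_toast.pg_toast_NNNNN;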

With both of these, are they hard limits, or can they be worked around by partitioning the table? Could we set the table up so that each child table stayed within the limits, with no limit on the number of children?
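
To illustrate what I mean by child tables, here is a rough sketch using inheritance-based partitioning (the table, column, and partition names are made up, and partitioning by date is just an example):

CREATE TABLE blob_store (
    id      bigint NOT NULL,
    created timestamptz NOT NULL,
    payload bytea
);

-- each child holds one slice of the data, so the per-table limits
-- would apply to each child rather than to the logical whole
CREATE TABLE blob_store_2014 (
    CHECK (created >= '2014-01-01' AND created < '2015-01-01')
) INHERITS (blob_store);

CREATE TABLE blob_store_2015 (
    CHECK (created >= '2015-01-01' AND created < '2016-01-01')
) INHERITS (blob_store);

-- route inserts on the parent to the appropriate child
CREATE OR REPLACE FUNCTION blob_store_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.created < '2015-01-01' THEN
        INSERT INTO blob_store_2014 VALUES (NEW.*);
    ELSE
        INSERT INTO blob_store_2015 VALUES (NEW.*);
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER blob_store_insert_trg
    BEFORE INSERT ON blob_store
    FOR EACH ROW EXECUTE PROCEDURE blob_store_insert();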

With point two, does this mean that any table with a bytea column is limited to 4 billion rows (which would seem to conflict with the "unlimited rows" shown by http://www.postgresql.org/about)? If we had rows where the bytea was NULL, would they count towards this total, or is the limit 4 billion non-null entries?
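
If it helps, this is the sort of small test I had in mind to answer the NULL question empirically (names are made up; SET STORAGE EXTERNAL is only there to force the non-null values out of line):

CREATE TABLE toast_test (id int, payload bytea);
ALTER TABLE toast_test ALTER COLUMN payload SET STORAGE EXTERNAL;

-- 1000 rows with NULL payloads, 1000 rows with ~10kB payloads
INSERT INTO toast_test SELECT g, NULL FROM generate_series(1, 1000) g;
INSERT INTO toast_test
    SELECT g, convert_to(repeat('x', 10000), 'UTF8')
    FROM generate_series(1001, 2000) g;

-- find the TOAST table, then count how many values were stored out of line;
-- if NULLs do not consume entries, the count should be 1000 rather than 2000
SELECT reltoastrelid::regclass FROM pg_class WHERE relname = 'toast_test';
-- SELECT count(DISTINCT chunk_id) FROM pg_toast.pg_toast_NNNNN;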

Thanks.
