Thread: Avg/max size of these JSON docs in Postgres
Hi Team,
Confirm what is Avg/max size of these JSON docs in Postgres.
Regards,
Ram Pratap.
On Tue, 12 Oct 2021 at 08:14, Ram Pratap Maurya
<ram.maurya@lavainternational.in> wrote:

> Confirm what is Avg/max size of these JSON docs in Postgres.

JSON and JSONB datatypes can both be max 1GB in size.

--
Simon Riggs                http://www.EnterpriseDB.com/
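As for the "avg/max size" part of the original question, the stored size of each document can be measured with pg_column_size(). A minimal sketch, assuming a hypothetical table "events" with a jsonb column "payload" (adjust names to your schema):

    -- Hypothetical table/column names; adjust to your schema.
    -- pg_column_size() reports the stored (possibly compressed/TOASTed)
    -- size of each value, in bytes.
    SELECT avg(pg_column_size(payload)) AS avg_bytes,
           max(pg_column_size(payload)) AS max_bytes
    FROM events;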
On Tue, Oct 12, 2021 at 1:43 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:
On Tue, 12 Oct 2021 at 08:14, Ram Pratap Maurya
<ram.maurya@lavainternational.in> wrote:
> Confirm what is Avg/max size of these JSON docs in Postgres.
JSON and JSONB datatypes can both be max 1GB in size.
That is per row.
Just to confirm- The compression associated with TOAST is only if the value can fit in line, right? Columns don't get stored out-of-line in a toast table as a compressed value as I read the documentation. I suppose that would make reads a bit crazy.
Michael Lewis <mlewis@entrata.com> writes:
> Just to confirm- The compression associated with TOAST is only if the value
> can fit in line, right? Columns don't get stored out-of-line in a toast
> table as a compressed value as I read the documentation. I suppose that
> would make reads a bit crazy.

The default behavior is first to compress any large field value(s), and
then if the row is still too big, push them out-of-line in compressed
form. You can modify that strategy on a per-column basis if need be,
but I've seldom heard good reasons to.

                        regards, tom lane
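For reference, the per-column strategy mentioned above is changed with ALTER TABLE ... SET STORAGE. A minimal sketch, using a hypothetical table "docs" with a jsonb column "body":

    -- Hypothetical table/column names; adjust to your schema.
    -- EXTENDED (default for jsonb): compress first, then move out of line
    --   to the TOAST table if the row is still too big.
    -- EXTERNAL: allow out-of-line storage, but skip compression.
    -- MAIN: prefer compression, keep the value in the main table if possible.
    ALTER TABLE docs ALTER COLUMN body SET STORAGE EXTERNAL;

The current setting for each column is visible in psql with \d+ docs.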
On Tue, 12 Oct 2021 at 18:53, Michael Lewis <mlewis@entrata.com> wrote:
>
> On Tue, Oct 12, 2021 at 1:43 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:
>>
>> On Tue, 12 Oct 2021 at 08:14, Ram Pratap Maurya
>> <ram.maurya@lavainternational.in> wrote:
>>
>> > Confirm what is Avg/max size of these JSON docs in Postgres.
>>
>> JSON and JSONB datatypes can both be max 1GB in size.
>
> That is per row.

No, that is per column.

--
Simon Riggs                http://www.EnterpriseDB.com/
On Tue, Oct 12, 2021 at 12:51 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:
On Tue, 12 Oct 2021 at 18:53, Michael Lewis <mlewis@entrata.com> wrote:
>
> On Tue, Oct 12, 2021 at 1:43 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:
>>
>> On Tue, 12 Oct 2021 at 08:14, Ram Pratap Maurya
>> <ram.maurya@lavainternational.in> wrote:
>>
>> > Confirm what is Avg/max size of these JSON docs in Postgres.
>>
>> JSON and JSONB datatypes can both be max 1GB in size.
>
> That is per row.
No, that is per column.
Yes, sorry. My attempt at clarification only muddled things. Each column within each row can be up to 1GB in size is how I understand the limit.
On 10/12/21 21:21, Michael Lewis wrote:
> On Tue, Oct 12, 2021 at 12:51 PM Simon Riggs
> <simon.riggs@enterprisedb.com> wrote:
>> On Tue, 12 Oct 2021 at 18:53, Michael Lewis <mlewis@entrata.com> wrote:
>>> On Tue, Oct 12, 2021 at 1:43 AM Simon Riggs
>>> <simon.riggs@enterprisedb.com> wrote:
>>>> On Tue, 12 Oct 2021 at 08:14, Ram Pratap Maurya
>>>> <ram.maurya@lavainternational.in> wrote:
>>>>
>>>> > Confirm what is Avg/max size of these JSON docs in Postgres.
>>>>
>>>> JSON and JSONB datatypes can both be max 1GB in size.
>>>
>>> That is per row.
>>
>> No, that is per column.
>
> Yes, sorry. My attempt at clarification only muddled things. Each column
> within each row can be up to 1GB in size is how I understand the limit.

But you're kinda right, actually. It very much is per-tuple, because in
various places we form tuples with all the data inline. Consider for
example this:

    create table t (a text, b text);
    alter table t alter column a set storage extended;
    alter table t alter column b set storage extended;

    insert into t select repeat(md5('a'), 512*1024*1024/32),
                         repeat(md5('b'), 512*1024*1024/32);
    ERROR:  invalid memory alloc request size 1073741880

Clearly, both values are only 512MB (no compression). Yet it fails,
simply because tts_virtual_copy_heap_tuple calls heap_form_tuple to
form a tuple with all values and also the tuple header.

But this succeeds, because the values are 64B shorter, leaving enough
space for the tuple header etc.

    insert into t select repeat(md5('a'), 512*1024*1024/32 - 2),
                         repeat(md5('b'), 512*1024*1024/32);

And you can even select the data:

    select * from t;

You can even do this:

    update t set a = repeat(md5('a'), 512*1024*1024/32);

which works, so now you have a row with two 512MB values. But then
you'll face this:

    select * from t;
    ERROR:  out of memory
    DETAIL:  Cannot enlarge string buffer containing 536870922 bytes by
    536870912 more bytes.

because printtup() builds a huge string with all the data (and if the
columns had compression, this would be decompressed, because it goes
to the client).

So yeah, there's an explicit 1GB limit per value, but having rows close
to the 1GB limit is going to cause all sorts of unpredictable and rather
painful issues :-(

regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
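A way to see how much of a stored value's size survives compression, continuing the example table t above, is to compare the on-disk size from pg_column_size() with the raw byte length from octet_length(). A minimal sketch:

    -- Continuing the example table t (text columns a and b).
    -- pg_column_size() is the stored size (after any compression/TOAST);
    -- octet_length() is the uncompressed length in bytes. A large gap
    -- between the two suggests the value is stored compressed.
    SELECT pg_column_size(a) AS a_stored_bytes,
           octet_length(a)   AS a_raw_bytes,
           pg_column_size(b) AS b_stored_bytes,
           octet_length(b)   AS b_raw_bytes
    FROM t;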
On Tue, 12 Oct 2021 at 23:07, Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
> So yeah, there's an explicit 1GB limit per value, but having rows close
> to the 1GB limit is going to cause all sorts of unpredictable and rather
> painful issues :-(

Sounds worth mentioning in doc/src/sgml/limits.sgml

--
Simon Riggs                http://www.EnterpriseDB.com/