Re: Re: Fatal Error : Invalid Memory alloc request size 1236252631 - Mailing list pgsql-general

From Pavel Stehule
Subject Re: Re: Fatal Error : Invalid Memory alloc request size 1236252631
Msg-id CAFj8pRA4kOVANXa9dzX80_ChQ=itDJ1v2DmcXXvMB4xQVkhQ8w@mail.gmail.com
In response to Aw: Re: Fatal Error : Invalid Memory alloc request size 1236252631  (Karsten Hilbert <Karsten.Hilbert@gmx.net>)
List pgsql-general
Hi

Thu 17 Aug 2023 at 16:48, Karsten Hilbert <Karsten.Hilbert@gmx.net> wrote:
 
I have also tried PostgreSQL Large Objects, following this link, to store and retrieve large files (since bytea was not working):
https://www.postgresql.org/docs/current/largeobjects.html
 
But I am still unable to fetch the data in one go from a large object:
 
select lo_get(oid);
 
Here I'm getting the same error message.
 
But if I use select data from pg_largeobject where loid = 49374
then I can fetch the data, though only page-wise (the data is split into rows of about 2 KB each).
 
So how can I fetch the data in a single step, rather than page by page, without any error?
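(For reference, the page-wise read described above becomes deterministic when ordered by page number; a minimal sketch, using the loid 49374 from the message, and noting that reading pg_largeobject directly usually requires superuser privileges:)

```sql
-- Pages of one large object; pg_largeobject stores pages of up to 2 KB each
SELECT pageno, data
FROM pg_largeobject
WHERE loid = 49374
ORDER BY pageno;
```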

SQL functionality is limited to 1GB.

You should use the \lo_import or \lo_export psql commands.
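(A minimal psql sketch of that, assuming the OID 49374 from the message above; the output path is illustrative:)

```
-- Export large object 49374 to a client-side file (path is illustrative)
\lo_export 49374 /tmp/blob.bin

-- Import a client-side file as a new large object (psql prints the new OID)
\lo_import /tmp/blob.bin
```

These are psql meta-commands that transfer the data client-side via the large-object API, so they are not subject to the 1GB limit on a single SQL result value.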


regards

Pavel

 
And I'm just wondering: how do many applications store huge amounts of data, in the GB range? I know PostgreSQL sets a 1GB limit on each field. If so, how does one deal with these kinds of situations? I'd like to understand this so I can handle real-world scenarios.
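(One common way applications handle GB-scale values is to stream them in fixed-size chunks client-side rather than materializing the whole value in one query result. A minimal sketch of that pattern follows; it uses an in-memory BytesIO as a stand-in for a large-object handle — with psycopg2, for example, the handle would come from conn.lobject(oid, 'rb'), connection details omitted here:)

```python
import io

CHUNK = 2048  # pg_largeobject stores pages of up to 2 KB (LOBLKSIZE)

def stream_chunks(src, chunk_size=CHUNK):
    """Yield fixed-size chunks from a binary file-like object until EOF."""
    while True:
        block = src.read(chunk_size)
        if not block:
            break
        yield block

# Stand-in for a large-object handle; 5000 bytes arrive as 3 chunks
payload = b"x" * 5000
reassembled = b"".join(stream_chunks(io.BytesIO(payload)))
assert reassembled == payload
```

The same loop works unchanged against any binary file-like reader, which is why streaming avoids the single-value size limit: no single read ever holds the whole object in one buffer.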



https://github.com/lzlabs/pg_dumpbinary/blob/master/README.md
might be of help

Karsten

