Re: [HACKERS] [ patch ] pg_dump: new --custom-fetch-table and --custom-fetch-value parameters - Mailing list pgsql-hackers

From Andrea Urbani
Subject Re: [HACKERS] [ patch ] pg_dump: new --custom-fetch-table and --custom-fetch-value parameters
Date
Msg-id trinity-b16e6d2c-09bf-4512-b5c8-9b4fd81268cb-1486824979231@3capp-mailcom-lxa05
In response to Re: [HACKERS] [ patch ] pg_dump: new --custom-fetch-table and --custom-fetch-value parameters  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: [HACKERS] [ patch ] pg_dump: new --custom-fetch-table and --custom-fetch-value parameters
List pgsql-hackers
I'm a beginner here... anyway, I'll try to share my ideas.

My situation has gotten worse: I'm no longer able to make a pg_dump, neither with my custom fetch value (I have tried "1" as the value = one row at a time) nor without "--column-inserts":

pg_dump: Dumping the contents of table "tDocumentsFiles" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR:  out of memory
DETAIL:  Failed on request of size 1073741823.
pg_dump: The command was: COPY public."tDocumentsFiles" ("ID_Document", "ID_File", "Name", "FileName", "Link", "Note", "Picture", "Content", "FileSize", "FileDateTime", "DrugBox", "DrugPicture", "DrugInstructions") TO stdout;

I don't know if the Kyotaro Horiguchi patch will solve this because, again, I'm not able to fetch even a single row.
I have a similar problem trying to read and write the blob fields from my program.
Currently I'm working in pieces:

Read:
r1) I get the length of the blob field
r2) I check the available free memory (on the client PC)
r3) I read pieces of the blob field, according to the free memory, appending them to a physical file
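The read steps above could be sketched like this. The table and column names come from the COPY command in the error message; the key condition and chunk size are hypothetical (the chunk size standing in for the free-memory check of step r2), and substring() on a bytea value is 1-based:

```python
# Sketch of steps r1-r3: read one large bytea value in pieces.
# Step r1 would first run something like:
#   SELECT octet_length("Content") FROM public."tDocumentsFiles" WHERE ...
# chunk_size is hypothetical, standing in for the free-memory check (r2).

def chunked_read_statements(total_len, chunk_size,
                            table='public."tDocumentsFiles"',
                            column='"Content"',
                            key='"ID_File" = %s'):
    """Return one SELECT per piece; bytea substring() is 1-based."""
    stmts = []
    offset = 1
    while offset <= total_len:
        n = min(chunk_size, total_len - offset + 1)
        stmts.append(f'SELECT substring({column} FROM {offset} FOR {n}) '
                     f'FROM {table} WHERE {key};')
        offset += n
    return stmts
```

Each returned piece would then be appended to the local file, as in step r3.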
Write:
w1) I check the length of the file to save inside the blob field
w2) I check the available free memory (on the client PC)
w3) I create a temporary table on the server
w4) I add rows to this temporary table, writing pieces of the file according to the free memory
w5) I ask the server to write, inside the final blob field, the concatenation of the rows of the temporary table
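On the SQL side, the write steps could look like this sketch (the temporary table name and key are hypothetical; string_agg() over bytea needs PostgreSQL 9.0+, and each %s placeholder would be bound to the next slice of the file):

```python
# Sketch of steps w3-w5: stage the file in a temporary table in pieces,
# then let the server concatenate them into the final bytea field.
# Table/column names other than those in the dump error are hypothetical.

def chunked_write_statements(file_size, chunk_size):
    # w3: temporary staging table
    stmts = ['CREATE TEMP TABLE file_chunks (seq int, part bytea);']
    offset, seq = 0, 0
    # w4: one INSERT per piece, sized by the free-memory check
    while offset < file_size:
        n = min(chunk_size, file_size - offset)
        stmts.append(f'INSERT INTO file_chunks VALUES ({seq}, %s);'
                     f'  -- next {n} bytes of the file')
        offset += n
        seq += 1
    # w5: server-side concatenation into the final field
    stmts.append(
        'UPDATE public."tDocumentsFiles" SET "Content" = '
        "(SELECT string_agg(part, ''::bytea ORDER BY seq) FROM file_chunks) "
        'WHERE "ID_File" = %s;')
    return stmts
```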
The read and the write are working now.
Probably the free memory check should be done on both sides (client and server [does a function/view with the available free memory exist?]), taking the smallest value.
What do you think about using a similar approach in pg_dump?
a) go through the table getting the size of each row / field
b) when the size of the row or of the field is bigger than the threshold value (provided or stored somewhere), read the field in pieces till the end
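The proposal in a) and b) could be sketched as pure planning logic (the threshold and column list are hypothetical; octet_length() is what the server would report per field):

```python
# Sketch of the proposed pg_dump behaviour: measure each field first (a),
# then read only the oversized ones in pieces (b). Names are hypothetical.

def size_query(table, columns):
    """Step a: ask the server for the size of every field of a row."""
    cols = ', '.join(f'octet_length({c})' for c in columns)
    return f'SELECT {cols} FROM {table};'

def oversized_fields(field_sizes, threshold):
    """Step b: indexes of the fields that must be read piece by piece."""
    return [i for i, size in enumerate(field_sizes) if size > threshold]
```

Fields below the threshold would still be dumped in one go; only the oversized ones would fall back to the substring() loop above.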

PS: I have seen there are "large objects" that can work via streams. My files are currently not bigger than 1 GB but, OK, maybe in the future I will use them instead of the blobs.
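For comparison, a piecewise read of a large object could use the server-side lo_get(oid, offset, length) function (available since PostgreSQL 9.4); the OID and sizes in this sketch are hypothetical:

```python
# Hypothetical sketch: read a large object in pieces with lo_get(),
# the server-side function that returns a bytea slice of the object.

def lo_read_statements(lo_oid, total_len, chunk_size):
    stmts = []
    offset = 0  # lo_get() offsets are 0-based, unlike bytea substring()
    while offset < total_len:
        n = min(chunk_size, total_len - offset)
        stmts.append(f'SELECT lo_get({lo_oid}, {offset}, {n});')
        offset += n
    return stmts
```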

Thank you 
Andrea


