I had a problem with PostgreSQL 9.3.5 on 32-bit Linux with an old 2.6.26 kernel:
Program: pg_dump

Problem: if you have tables with big blob fields and you try to dump them with --inserts, you can get errors like:

    pg_dump: [archiver (db)] query failed: lost synchronization with server: got message type "D", length 847712348
    pg_dump: [archiver (db)] query was: FETCH 100 FROM _pg_dump_cursor

or:

    pg_dump: [archiver (db)] query failed: ERROR: out of memory
    DETAIL: Failed on request of size 1073741823.
    pg_dump: [archiver (db)] query was: FETCH 100 FROM _pg_dump_cursor
I solved it by adding two new parameters, --custom-fetch-table and --custom-fetch-value, to fetch fewer records at a time for the specified table(s). This does not completely solve the problem, but it gives you a better chance of being able to dump your database.
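A hypothetical invocation might look like the following; the exact option syntax depends on the patch, and the table name and fetch size here are only placeholders:

```shell
# Dump with INSERT statements, fetching only 10 rows at a time
# from the blob-heavy table instead of the default 100.
# ("big_blob_table", 10, and "mydb" are illustrative values)
pg_dump --inserts \
        --custom-fetch-table=big_blob_table \
        --custom-fetch-value=10 \
        mydb > mydb_dump.sql
```

The idea is that a smaller fetch size keeps each server reply small enough to fit in memory on a 32-bit build, at the cost of more round trips for that table.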
I haven't tested the documentation: too many problems while building it (the same happens with the original version, without my changes; probably I have broken tools... and too little time to check).
Attached are the patches for master and REL9_6_STABLE.