reducing IO and memory usage: sending the content of a table to multiple files - Mailing list pgsql-general

From Ivan Sergio Borgonovo
Subject reducing IO and memory usage: sending the content of a table to multiple files
Date
Msg-id 20090402112002.1e7b5e43@dawn.webthatworks.it
Responses Re: reducing IO and memory usage: sending the content of a table to multiple files
List pgsql-general
This is the workflow I have in mind:

1a) take out *all* data from a table in chunks (M records per
file, or one big file?) (\copy?? from inside a scripting language?)

2a) process each file with awk to produce N files that are very
similar to each other (essentially turning them into very simple XML)
3a) gzip them

2b) or use any scripting language to process and gzip them, avoiding
a bit of disk IO
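Step 2b above could be sketched roughly like this in Python (a minimal sketch, not a finished tool: the column names `a`, `b`, `c` come from the query later in this mail, while the function name, the output file name, and the flat `<rows>/<row>` XML layout are my own assumptions). The point is that the XML is written straight into the gzip stream, so no uncompressed intermediate file ever touches the disk:

```python
import csv
import gzip
import io
from xml.sax.saxutils import escape

def csv_chunk_to_xml_gz(csv_text, out_path, columns):
    """Turn one CSV chunk (as \\copy ... WITH CSV would produce it)
    into very simple XML and gzip it in a single pass."""
    with gzip.open(out_path, "wt", encoding="utf-8") as out:
        out.write("<rows>\n")
        for row in csv.reader(io.StringIO(csv_text)):
            # escape() protects values that contain <, > or &
            cells = "".join(
                "<%s>%s</%s>" % (c, escape(v), c)
                for c, v in zip(columns, row)
            )
            out.write("  <row>%s</row>\n" % cells)
        out.write("</rows>\n")

# hypothetical two-row chunk; second row holds a value needing escaping
chunk = "1,widget,9.99\n2,gadget,<5>\n"
csv_chunk_to_xml_gz(chunk, "chunk0001.xml.gz", ["a", "b", "c"])
```

The same loop would work when fed from a pipe instead of a string, so the \copy output need not be materialized either.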

Does PostgreSQL offer any contrib, module, technique... to save
some IO (and maybe disk space for temporary results)?

Are there any memory usage implications if I do a:
pg_query("select a,b,c from verylargetable; --no where clause");
vs.
the \copy equivalent, and is there any way to avoid them?
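For what it is worth, a sketch of the two usual ways to keep the client-side memory bounded (assuming the table and column names from the query above; the cursor name and batch size are arbitrary). A plain query through a libpq-based interface such as pg_query() normally buffers the whole result set on the client before the first row is available, whereas a cursor or \copy streams it:

```sql
-- Pull rows in bounded batches through a cursor:
BEGIN;
DECLARE big_cur NO SCROLL CURSOR FOR
    SELECT a, b, c FROM verylargetable;
FETCH FORWARD 10000 FROM big_cur;   -- repeat until no rows come back
CLOSE big_cur;
COMMIT;

-- Or stream everything with \copy from psql; its memory footprint
-- stays flat regardless of the table size:
-- \copy (SELECT a, b, c FROM verylargetable) TO 'chunk.csv' WITH CSV
```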

thanks

--
Ivan Sergio Borgonovo
http://www.webthatworks.it

