Breaking up a PostgreSQL COPY command into chunks? - Mailing list pgsql-general

From: Victor Hooi
Subject: Breaking up a PostgreSQL COPY command into chunks?
Date:
Msg-id: CAMnnoU+2=ZjcouydmqAXu7dG0HzjVF5mV3+-bs3icWu9-TV=DQ@mail.gmail.com
List: pgsql-general
Hi,

We're using psycopg2 with COPY to dump CSV output from a large query.

The SELECT query itself is large (both in the number of records/columns and in the width of the column values), but it still completes in under a minute on the server.

However, when we then run it through COPY, it often gets cancelled partway through.

We run the command via psycopg2's copy_expert, and the traceback we get looks like this:

Traceback (most recent call last):
 File "foo.py", line 259, in <module>
   jobs[job].run_all()
 File "foo.py", line 127, in run_all
   self.export_to_csv()
 File "foo.py", line 168, in export_to_csv
   cur.copy_expert(self.export_sql_statement, f)
psycopg2.extensions.TransactionRollbackError: canceling statement due to conflict with recovery
DETAIL:  User was holding shared buffer pin for too long.
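
For context, the export boils down to roughly this (the DSN, table name, and file name here are simplified stand-ins; the real connection points at a hot standby and the real query is much bigger):

import psycopg2

conn = psycopg2.connect("dbname=reporting")  # stand-in DSN; ours is a hot-standby replica
cur = conn.cursor()
with open("export.csv", "w") as f:
    # One big COPY wrapping the whole SELECT, written out as CSV
    cur.copy_expert(
        "COPY (SELECT * FROM big_table) TO STDOUT WITH CSV HEADER", f
    )
conn.close()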

My question is: what are some simple ways to chunk up the query?

Could we pull down a list of all the ids (an auto-incrementing integer), split it into ranges, and then run a separate COPY command with a WHERE clause for each range?
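
Something like this sketch is what I had in mind (table/column names are made up, and the chunk size is a guess we'd have to tune):

import psycopg2

CHUNK = 100000  # rows per COPY; just a guess at a workable size

conn = psycopg2.connect("dbname=reporting")  # stand-in DSN
cur = conn.cursor()
cur.execute("SELECT min(id), max(id) FROM big_table")
lo, hi = cur.fetchone()

with open("export.csv", "w") as f:
    for start in range(lo, hi + 1, CHUNK):
        # Each COPY is a short-lived statement, so no single scan has
        # to hold a buffer pin for the whole export. Gaps in the id
        # sequence just make some chunks smaller. No HEADER, so the
        # chunks concatenate cleanly into one CSV file.
        cur.copy_expert(
            "COPY (SELECT * FROM big_table WHERE id BETWEEN %d AND %d) "
            "TO STDOUT WITH CSV" % (start, start + CHUNK - 1),
            f,
        )
conn.close()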

Or would it be better to use LIMIT/OFFSET to break it up? I'm not sure how we'd detect when we've reached the end of the result set, though (apart from just counting the rows?).
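
With LIMIT/OFFSET I imagine something like the following, stopping when a chunk comes back short. This assumes cur.rowcount reports the number of rows a COPY wrote (I believe psycopg2 takes this from the server's "COPY n" result tag, but I haven't verified it), and it needs a stable ORDER BY for OFFSET to be meaningful:

import psycopg2

LIMIT = 100000  # rows per chunk

conn = psycopg2.connect("dbname=reporting")  # stand-in DSN
cur = conn.cursor()

with open("export.csv", "w") as f:
    offset = 0
    while True:
        cur.copy_expert(
            "COPY (SELECT * FROM big_table ORDER BY id "
            "LIMIT %d OFFSET %d) TO STDOUT WITH CSV" % (LIMIT, offset),
            f,
        )
        # A short chunk means we've reached the end of the result set
        if cur.rowcount < LIMIT:
            break
        offset += LIMIT
conn.close()

Though I gather OFFSET makes the server scan and discard all the skipped rows on every iteration, so the id-range approach would presumably scale better.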

Or are there other approaches you guys could recommend?

Cheers,
Victor
