Hi Mariel,
Please don't top-post in the PostgreSQL groups.
On 9/5/2017 7:28 AM, Mariel Cherkassky wrote:
> 2017-08-31 16:24 GMT+03:00 George Neuner <gneuner2@comcast.net>:
>
>> One thing you might look at is the isolation level of the query.
>> If you are using READ_COMMITTED or less, and the table is busy,
>> other writing queries may be stepping on yours and even
>> potentially changing your result set during the cursor loop. I
>> would try using REPEATABLE_READ and see what happens.
>
> I didn't understand what do you mean by REPEATABLE_READ .

I was referring to transaction isolation levels. When multiple transactions run concurrently, the DBMS may or may not prevent them from seeing changes made by one another, depending on the isolation level in effect. Consider two transactions, A and B, running concurrently:
T1: A reads table X
T2: B writes to table X
T3: B commits
T4: A reads table X again.
Depending on the isolation level [and the specific query, obviously], A may or may not be able to see the changes B made to X.
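To make that concrete, here is a rough sketch of the same timeline in SQL (the table "x" and column "n" are just placeholders for this example):

    -- session A                          -- session B
    BEGIN;
    SELECT sum(n) FROM x;    -- T1
                                          BEGIN;
                                          UPDATE x SET n = n + 1;   -- T2
                                          COMMIT;                   -- T3
    SELECT sum(n) FROM x;    -- T4
    COMMIT;

Under READ COMMITTED the SELECT at T4 sees B's committed update; under REPEATABLE READ it returns the same result as T1.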
The default isolation level in PostgreSQL is READ COMMITTED, which allows a transaction to see writes committed by concurrently running transactions. REPEATABLE READ is a higher isolation level: it effectively takes a snapshot of the data the first time it is read, and guarantees that any further reads (e.g., by cursors) within the same transaction continue to see the same results.
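The isolation level is set per transaction, before its first query. A minimal example (standard syntax, see the BEGIN docs linked below):

    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    -- declare the cursor and run the loop here; every read sees one snapshot
    COMMIT;

Note that, as far as I know, you cannot change the isolation level from inside a PL/pgSQL function, because the function runs within the caller's already-started transaction; it has to be set by the client, which is what the Stack Overflow link below is about.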
My thought was that your loop may be running slowly because the table is being changed underneath your cursor. It may be better to pull the results into a temporary table and run your cursor loop over that.
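A rough sketch of the temporary-table approach (the table and column names are placeholders; adapt them to your schema):

    -- freeze the result set first ...
    CREATE TEMP TABLE work_set AS
        SELECT id, payload
        FROM   my_table;

    -- ... then loop over the frozen copy, which other sessions cannot modify
    DO $$
    DECLARE
        rec RECORD;
    BEGIN
        FOR rec IN SELECT id, payload FROM work_set LOOP
            RAISE NOTICE 'processing id %', rec.id;   -- your per-row work here
        END LOOP;
    END
    $$;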
For more information, see:
https://www.postgresql.org/docs/current/static/transaction-iso.html
https://www.postgresql.org/docs/9.6/static/sql-begin.html
https://stackoverflow.com/questions/6274457/set-isolation-level-for-postgresql-stored-procedures

George