Re: database-level lockdown - Mailing list pgsql-general

From Filipe Pina
Subject Re: database-level lockdown
Msg-id 1435944321.27615.2@smtp.gmail.com
In response to Re: database-level lockdown  (Filipe Pina <filipe.pina@impactzero.pt>)
Responses Re: database-level lockdown  (Melvin Davidson <melvin6925@gmail.com>)
List pgsql-general
So, as database-level locks do not exist (per https://wiki.postgresql.org/wiki/Lock_database), I've thought of having a function that locks *every* table in the database (far from the same thing, but it would probably work for me).

Something like:

CREATE OR REPLACE FUNCTION lockdown()
    RETURNS void AS $$
DECLARE
  t information_schema.tables.table_name%TYPE;
BEGIN
  FOR t IN SELECT table_name
             FROM information_schema.tables
            WHERE table_schema = 'public'
              AND table_type = 'BASE TABLE'
  LOOP
    -- LOCK TABLE defaults to ACCESS EXCLUSIVE; format(%I) quotes the identifier
    EXECUTE format('LOCK TABLE %I IN ACCESS EXCLUSIVE MODE', t);
  END LOOP;
END
$$
LANGUAGE plpgsql;
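
For clarity, the intended invocation would be inside a single transaction, something like this (just a sketch; the middle statement is a placeholder for whatever the developer function actually does):

BEGIN;
SELECT lockdown();
-- ... the developer function's statements would run here ...
COMMIT;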

But the problem is that calling the function starts its own transaction and, once it returns, the locks are removed...

Basically, the workflow is (pseudocode on the Django/Python/psycopg2 side, not within pgsql):

function generic_function_restarter(developer_function) {
  # try 4 times to execute the developer function and, if all of them fail
  # (due to SQLSTATE 40001 serialization failures),
  # lock the database and execute one last time
  for i in [1..4] {
    try {
      call developer_function()
      return 'success'
    }
    except SQLSTATE_40001 {
      continue
    }
    except other_error {
      return other_error
    }
  }

  # only reaches here if all tries failed with SQLSTATE_40001
  try {
    START TRANSACTION
    call lockdown()
    call developer_function()
    COMMIT TRANSACTION
    return 'success'
  }
  except any_error {
    # implicit ROLLBACK
    return any_error
  }
}
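
In concrete psycopg2 terms, that wrapper would look roughly like this (a sketch only; generic_function_restarter and developer_function are the illustrative names from the pseudocode above, and lockdown() is the function from earlier):

import psycopg2
from psycopg2 import errorcodes

MAX_RETRIES = 4

def generic_function_restarter(conn, developer_function):
    # retry only on serialization failures (SQLSTATE 40001)
    for attempt in range(MAX_RETRIES):
        try:
            with conn:  # one transaction per attempt; commits on success
                developer_function(conn)
            return 'success'
        except psycopg2.OperationalError as e:
            if e.pgcode != errorcodes.SERIALIZATION_FAILURE:
                raise
            # 40001: transaction was rolled back, loop and retry

    # only reaches here if all tries failed with 40001:
    # lock everything, then execute one last time
    with conn:  # implicit ROLLBACK if anything raises
        with conn.cursor() as cur:
            cur.execute('SELECT lockdown()')
            developer_function(conn)
    return 'success'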

So, my problem here is that "call lockdown()" will place the locks and remove them upon returning... Is it possible to execute a function without creating a subtransaction?

I could place the locks from the adapter directly at the outer transaction level, but I have the feeling that performance would be worse...
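
By "directly", I mean something like this on the final attempt, instead of calling lockdown() (again just a sketch; identifier quoting is kept naive since the names come straight from the catalog):

with conn:
    with conn.cursor() as cur:
        cur.execute("""
            SELECT table_name
              FROM information_schema.tables
             WHERE table_schema = 'public'
               AND table_type = 'BASE TABLE'
        """)
        for (name,) in cur.fetchall():
            # take the same ACCESS EXCLUSIVE lock per table, from the adapter
            cur.execute('LOCK TABLE "{}" IN ACCESS EXCLUSIVE MODE'.format(name))
        developer_function(conn)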

Thanks,
Filipe

On Fri, Jun 12, 2015 at 5:25, Filipe Pina <filipe.pina@impactzero.pt> wrote:
Exactly, that’s why there’s a limit on the number of retries. On the last try I wanted something like a full lockdown, to make sure the transaction would not fail due to a serialization failure (if no other processes are touching the database, it can’t happen). So if two transactions were retrying over and over, the first one to reach max_retries would activate that “global lock”, making the other one wait; then the second one would also be able to commit successfully...
On 11/06/2015, at 20:27, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Filipe Pina <filipe.pina@impactzero.pt> writes:
It will try 5 times to execute each instruction (in case of OperationalError) and on the last one it will raise the last error it received, aborting.
Now my problem is that aborting on the last try (on a restartable error, OperationalError code 40001) is not an option... It simply needs to get through, blocking whatever other processes and queries it needs to.
I think you need to reconsider your objectives. What if two or more transactions are repeatedly failing and retrying, perhaps because they conflict? They can't all forcibly win.

regards, tom lane
