Re: database-level lockdown - Mailing list pgsql-general

From Filipe Pina
Subject Re: database-level lockdown
Date
Msg-id 4173328336935884564@unknownmsgid
Whole thread Raw
In response to Re: database-level lockdown  (Melvin Davidson <melvin6925@gmail.com>)
Responses Re: database-level lockdown
List pgsql-general
Thanks for the suggestion. I read that some people do use that strategy for
maintenance sometimes, but it's not feasible in this scenario.

I would have to disallow new connections AND kill all existing connections
(as there would be an existing connection pool), but this wouldn't have the
same impact as using LOCKs...
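For reference, that combination (block new connections, then terminate the
existing ones) boils down to two statements. A minimal sketch that only
assembles the SQL, with the actual execution through psycopg2 and the
required superuser rights left as assumptions:

```python
# Sketch only: builds the two statements needed to lock out a database.
# Names are illustrative; running them needs superuser privileges.
def lockout_statements(dbname):
    quoted = dbname.replace("'", "''")  # naive literal quoting, sketch only
    return [
        # stop new connections to the database
        "UPDATE pg_database SET datallowconn = FALSE "
        "WHERE datname = '%s'" % quoted,
        # terminate every other backend connected to it (pid column, 9.2+)
        "SELECT pg_terminate_backend(pid) FROM pg_stat_activity "
        "WHERE datname = '%s' AND pid <> pg_backend_pid()" % quoted,
    ]
```

Reverting is the same UPDATE with datallowconn = TRUE; and with a pool in
place the pool itself would also have to be paused, or it just reconnects.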

Terminating all sessions would break every other transaction (except the
one doing it). Locking the database would merely put all the others on hold.
As we're talking about quick/instant operations, being put on hold will have
some impact on performance but won't cause anything to abort...

I really can't find any other solution for what I need (in short: make sure
no transactions are left out due to serialization failures).
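The overall flow I'm describing (retry a few times, then lock down and run
once more) can be modeled in plain Python without a database; the
SerializationFailure class and the lockdown callable below are stand-ins
for illustration, not real psycopg2 names:

```python
class SerializationFailure(Exception):
    """Stand-in for a psycopg2 error carrying SQLSTATE 40001."""

def run_with_retries(developer_function, lockdown, max_retries=4):
    # First phase: plain retries on serialization failures.
    for _ in range(max_retries):
        try:
            return developer_function()
        except SerializationFailure:
            continue
    # Last resort: take the "global lock" and run one final time.
    # Any exception here propagates (implicit rollback in the real thing).
    lockdown()
    return developer_function()
```

A function that fails twice and then succeeds returns on the third attempt,
without the lockdown step ever running.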


On 03/07/2015, at 19:00, Melvin Davidson <melvin6925@gmail.com> wrote:

Wouldn't it be easier just to prevent connections to the database
 while your transaction is executed?
EG:
<Connect to your_database>
UPDATE pg_database
   SET datallowconn = FALSE
 WHERE datname = 'your_database';
 START TRANSACTION;
<Do your_transaction>
  COMMIT;
UPDATE pg_database
   SET datallowconn = TRUE
 WHERE datname = 'your_database';


On Fri, Jul 3, 2015 at 1:25 PM, Filipe Pina <filipe.pina@impactzero.pt>
wrote:

> So, as database level locks do not exist (as per
> https://wiki.postgresql.org/wiki/Lock_database), I've thought of having a
> function that would lock *every* table in the database (far from the same,
> but would probably work for me).
>
> Something like:
>
> CREATE OR REPLACE FUNCTION lockdown()
>     RETURNS void AS $$
> DECLARE
>   t information_schema.tables.table_name%TYPE;
> BEGIN
>   FOR t IN SELECT table_name
>     FROM information_schema.tables
>    WHERE table_schema = 'public'
>      AND table_type = 'BASE TABLE'
>   LOOP
>     EXECUTE 'LOCK TABLE ' || quote_ident(t) || ' IN ACCESS EXCLUSIVE MODE';
>   END LOOP;
> END
> $$
> LANGUAGE plpgsql;
>
> But the problem is that calling the function starts its own transaction
> and once it returns, locks are removed..
>
> Basically the workflow is (pseudo code coming from
> Django/python/psycopg2/external, not within pgsql):
>
> function generic_function_restarter(developer_function) {
>   # try 4 times to execute developer function and if all of them fail
>   # (due to SQLSTATE 40001 serialization failures),
>   # lock database and execute one last time
>   for i in [1..4] {
>     try {
>       call developer_function()
>       return 'success'
>     }
>     except SQLSTATE_40001 {
>       continue
>     }
>     except other_error {
>       return other_error
>     }
>   }
>
>   # only reaches here if all tries failed with SQLSTATE_40001
>   try {
>     START TRANSACTION
>     call lockdown()
>     call developer_function()
>     COMMIT TRANSACTION
>     return 'success'
>   }
>   except any_error {
>     # implicit ROLLBACK
>     return any_error
>   }
> }
>
> So, my problem here is that "call lockdown()" will place the locks and
> remove them upon returning... Is it possible to execute a function
> without creating a subtransaction?
>
> I could place the locks from the adapter directly at the outer
> transaction level but I have the feeling that performance would be
> worse...
>
> Thanks,
> Filipe
>
> On Sex, Jun 12, 2015 at 5:25 , Filipe Pina <filipe.pina@impactzero.pt>
> wrote:
>
> Exactly, that's why there's a limit on the retry number. On the last try I
> wanted something like full lockdown to make sure the transaction will not
> fail due to serialization failure (if no other processes are touching the
> database, it can't happen). So if two transactions were retrying over and
> over, the first one to reach max_retries would activate that "global
> lock", making the other one wait, and then the second one would also be
> able to successfully commit...
>
> On 11/06/2015, at 20:27, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Filipe Pina <filipe.pina@impactzero.pt> writes:
>
> It will try 5 times to execute each instruction (in case of
> OperationalError) and on the last one it will raise the last error it
> received, aborting.
>
>  Now my problem is that aborting for the last try (on a restartable error
> - OperationalError code 40001) is not an option... It simply needs to get
> through, locking whatever other processes and queries it needs.
>
> I think you need to reconsider your objectives. What if two or more
> transactions are repeatedly failing and retrying, perhaps because they
> conflict? They can't all forcibly win.
>
> regards, tom lane
>
>


--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
