Thread: Re: How to release locks

Re: How to release locks

From: "Qingqing Zhou"
"Andrus" <eetasoft@online.ee> wrote
> After killing client from XP task manager command
>
> DELETE FROM mytable
>
> runs forever.
>
> This is probably because pgAdmin shows that there are some locks placed
> (see included picture).
>

It is not because of the locks: there is only one running transaction (and it
holds all the locks). I am not sure why the backend is stuck there; it seems
unaware that the client is gone. Normally, if you kill a client, the server
prints something like "could not receive data from client" and then exits the
backend.
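One way to check the lock state from psql, as a sketch (the pg_locks columns
here follow the 8.x catalogs; names can vary across versions):

```sql
-- Show which backends hold locks and which are still waiting (granted = false).
SELECT l.pid, l.relation::regclass AS rel, l.mode, l.granted
FROM pg_locks l
ORDER BY l.granted, l.pid;
```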

If you can attach to the problematic postgres process, that would be very
helpful in identifying the problem.

Regards,
Qingqing



Re: How to release locks

From: "Andrus"
> It is not because of the locks: there is only one running transaction (and
> it holds all the locks). I am not sure why the backend is stuck there; it
> seems unaware that the client is gone. Normally, if you kill a client, the
> server prints something like "could not receive data from client" and then
> exits the backend.
>
> If you can attach to the problematic postgres process, that would be very
> helpful in identifying the problem.

My ODBC client uploads a 9 MB binary file to a server bytea column in 1 MB
hex-encoded blocks, using code something like this:

START TRANSACTION;
DELETE FROM localfil WHERE LOWER(filename) = 'alguss.exe';
INSERT INTO localfil (filename, filedirect, blocknumber, lastchange, contents)
    VALUES ('alguss.exe', 'algus', 1, CURRENT_TIMESTAMP,
            decode('1mbbinarystring', 'hex'));
INSERT INTO localfil (filename, filedirect, blocknumber, lastchange, contents)
    VALUES ('alguss.exe', 'algus', 2, CURRENT_TIMESTAMP,
            decode('1mbbinarystring', 'hex'));
-- ... and so on, 8 INSERT statements in all
COMMIT;

Each INSERT command takes a long time. I terminated the client application
from the Windows XP Task Manager during an INSERT.
On the next run, both the application and pgAdmin hang on the DELETE FROM
line.

I think I can reproduce this. The Postgres server runs on FreeBSD.
I can ask the server admin to do something after the problem occurs.

What should I do after reproducing the problem?

Table structure is

CREATE TABLE public.localfil (
    Filename CHAR(50),
    FileDirect CHAR(8),
    BlockNumber INTEGER,
    lastchange timestamp NOT NULL,
    Contents BYTEA NOT NULL,
    PRIMARY KEY ( Filename, FileDirect, BlockNumber )
);

CREATE TRIGGER localfil_trig BEFORE INSERT OR UPDATE
ON localfil EXECUTE PROCEDURE setlastchange();

and table is empty.



Re: How to release locks

From: "Qingqing Zhou"
"Andrus" <eetasoft@online.ee> wrote
>
> Each INSERT command takes a long time. I terminated the client application
> from the Windows XP Task Manager during an INSERT.
> On the next run, both the application and pgAdmin hang on the DELETE FROM
> line.
>
> I think I can reproduce this. The Postgres server runs on FreeBSD.
> I can ask the server admin to do something after the problem occurs.
>
> What should I do after reproducing the problem?
>

A backtrace would be most useful. Just build a debug version
(--enable-debug) and restart the server; when you reproduce the problem,
attach gdb to the stuck backend and run "bt".
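Something like this, as a sketch (the binary path and the pid 12345 are
placeholders for the actual postgres executable and the stuck backend):

```shell
# Find the stuck backend's pid first, e.g.:
#   ps ax | grep postgres
# Then attach gdb to it:
gdb /usr/local/bin/postgres 12345
# At the (gdb) prompt:
#   bt       -- print the backtrace
#   detach   -- release the backend unharmed
#   quit
```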

Regards,
Qingqing



Re: How to release locks

From: "Andrus"
>> It is not because of the locks: there is only one running transaction (and
>> it holds all the locks). I am not sure why the backend is stuck there; it
>> seems unaware that the client is gone. Normally, if you kill a client, the
>> server prints something like "could not receive data from client" and then
>> exits the backend.

I tried to kill the process using the Cancel button on the pgAdmin Status
page. pgAdmin responds:

A cancel signal was sent to the selected server process(es).

However, the process continues running.

So it seems the child process does not respond to the cancel signal from the
main process.
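For reference, the same cancel can be sent from SQL, as a sketch (12345
stands in for the stuck backend's pid, and the column names are the 8.x-era
ones):

```sql
-- Find the stuck backend's pid (8.1-era columns: procpid, current_query).
SELECT procpid, current_query FROM pg_stat_activity;
-- Send it the same cancel signal (SIGINT) that pgAdmin's button uses.
SELECT pg_cancel_backend(12345);
```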

I can reproduce this only in the production database; in the test database it
works OK.

Andrus.



Re: How to release locks

From: Tom Lane
"Andrus" <eetasoft@online.ee> writes:
> I tried to kill the process using the Cancel button on the pgAdmin Status
> page. pgAdmin responds:
> A cancel signal was sent to the selected server process(es).
> However, the process continues running.

What's it doing exactly?

We recently fixed some places in btree index creation where the thing
could run for a long time before noticing a cancel interrupt.  Maybe
you've found another one.

            regards, tom lane

Re: How to release locks

From: "Andrus"
>> A cancel signal was sent to the selected server process(es).
>> However, the process continues running.
>
> What's it doing exactly?

The client was killed while it was waiting for a 2 MB bytea INSERT statement
to complete inside the transaction:

INSERT INTO localfil (filename, filedirect, blocknumber, lastchange, contents)
    VALUES ('alguss.exe', 'algus', 1, CURRENT_TIMESTAMP,
            decode('1mbbinarystring', 'hex'));

The ODBC driver was in the middle of sending data through libpq when the
client was terminated.

> We recently fixed some places in btree index creation where the thing
> could run for a long time before noticing a cancel interrupt.  Maybe
> you've found another one.

The localfil table is empty before the insert (it held large bytea values,
but all rows are deleted before the repro), so the index (primary key) covers
only a single row.

Can a process running a 2 MB bytea INSERT take an extremely long time to
interrupt? Or can decode() take a long time to interrupt?
There is also a BEFORE INSERT OR UPDATE trigger; maybe that has an influence?
I'm currently uploading 1 MB binary blocks, which are converted to 2 MB hex
strings before being sent through ODBC.

The file is uploaded, stored, and downloaded via the localfil table as a
number of records, each containing 1 MB of data.

Would it be more reasonable to use smaller blocks to load and store files?

Andrus.

My table structure is

CREATE TABLE public.localfil (
Filename CHAR(50),
FileDirect CHAR(8),
BlockNumber INTEGER,
lastchange timestamp NOT NULL,
Contents BYTEA NOT NULL,
PRIMARY KEY ( Filename, FileDirect, BlockNumber )
);

CREATE TRIGGER localfil_trig BEFORE INSERT OR UPDATE
ON localfil EXECUTE PROCEDURE setlastchange();



Re: How to release locks

From: Tom Lane
"Andrus" <eetasoft@online.ee> writes:
>> What's it doing exactly?

> The ODBC driver was in the middle of sending data through libpq when the
> client was terminated.

Do you mean that the statement hadn't been fully transmitted yet?
If so, the backend would have just been waiting for the rest of the
statement to arrive.  Perhaps you're looking at some sort of network bug
that caused the backend not to be told that the connection was lost.
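One way to test that theory, as a sketch (5432 is the assumed Postgres port),
is to run this on the FreeBSD server after killing the client:

```shell
# A lingering ESTABLISHED entry for the dead client would mean the
# server kernel was never told the peer went away.
netstat -an | grep 5432
```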

> Can a process running a 2 MB bytea INSERT take an extremely long time to
> interrupt? Or can decode() take a long time to interrupt?
> There is also a BEFORE INSERT OR UPDATE trigger; maybe that has an influence?

You seem to have a lot of variables there --- maybe you should do some
experimenting to narrow down what the problem is.  I'd expect all of
those things to take some time, but not more than a few seconds...
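A sketch of such an experiment in psql, isolating one variable at a time (the
2 MB hex literal is elided here as '...', and DISABLE TRIGGER assumes a
server version that supports it):

```sql
\timing
-- 1. decode() alone, with no table or trigger involved:
SELECT length(decode('...', 'hex'));
-- 2. the INSERT with the trigger taken out of the picture:
ALTER TABLE localfil DISABLE TRIGGER localfil_trig;
INSERT INTO localfil VALUES ('alguss.exe', 'algus', 1,
                             CURRENT_TIMESTAMP, decode('...', 'hex'));
ALTER TABLE localfil ENABLE TRIGGER localfil_trig;
```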

            regards, tom lane

Re: How to release locks

From: "Andrus"
> Do you mean that the statement hadn't been fully transmitted yet?

Yes. Sending the 2 MB takes most of the time, so the client was terminated at
this stage.

> If so, the backend would have just been waiting for the rest of the
> statement to arrive.  Perhaps you're looking at some sort of network bug
> that caused the backend not to be told that the connection was lost.

If so, why doesn't the pgAdmin Cancel command terminate the process? If the
process is waiting for data, a cancel signal sent through pgAdmin should
terminate it immediately.

Andrus.



Re: How to release locks

From: Jim Nasby
On Mar 22, 2006, at 7:14 PM, Andrus wrote:

>> Do you mean that the statement hadn't been fully transmitted yet?
>
> Yes. Sending the 2 MB takes most of the time, so the client was terminated
> at this stage.
>
>> If so, the backend would have just been waiting for the rest of the
>> statement to arrive.  Perhaps you're looking at some sort of network bug
>> that caused the backend not to be told that the connection was lost.
>
> If so, why doesn't the pgAdmin Cancel command terminate the process? If the
> process is waiting for data, a cancel signal sent through pgAdmin should
> terminate it immediately.

If pgAdmin's cancel is just dropping the connection, the server might
take some time to notice it, especially if it's in the process of
running a query and doesn't have reason to talk to pgAdmin.
--
Jim C. Nasby, Database Architect                decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"



Re: How to release locks

From: "Andrus"
I find that the server process for my killed connection disappears from the
Status pane after a long time.

> If pgAdmin's cancel is just dropping the connection, the server might
> take some time to notice it, especially if it's in the process of  running
> a query and doesn't have reason to talk to pgAdmin.

I think pgAdmin sends the cancel command through the main postgres server
process, so the server receives it immediately and passes it on immediately
to the selected child process.

I have no idea what the pgAdmin cancel command actually does.
After executing it, the processes always remain visible in the Status pane.
I set log_statement = 'all' in postgresql.conf, but the cancel command did
not write any entry to the log file.

There is also a Terminate button in the Status panel, but it is always
disabled. I haven't found a way to enable it.