Thread: dblink question please

dblink question please

From: "Frankie Lam"
Hello,

Does anyone know if there's a simple way to keep dblink function calls in a
plpgsql function from raising an exception when something goes wrong
(e.g. cannot connect to the remote host)?

I want this because I'd like more control inside the plpgsql function, as
plpgsql only offers very limited error/exception handling.

Regards, Frankie.




Re: dblink question please

From: Joe Conway
Frankie Lam wrote:
> Does anyone know if there's a simple way to keep dblink function calls in a
> plpgsql function from raising an exception when something goes wrong
> (e.g. cannot connect to the remote host)?

Not without hacking dblink.c.

At a quick glance it looks like it might be reasonably safe to use
dblink_connect with the lines:

8<---------------------------------
if (PQstatus(persistent_conn) == CONNECTION_BAD)
{
    msg = pstrdup(PQerrorMessage(persistent_conn));
    PQfinish(persistent_conn);
    persistent_conn = NULL;
    elog(ERROR, "dblink_connect: connection error: %s", msg);
}
8<---------------------------------

changed to something like (untested)

8<---------------------------------
if (PQstatus(persistent_conn) == CONNECTION_BAD)
{
    msg = pstrdup(PQerrorMessage(persistent_conn));
    PQfinish(persistent_conn);
    persistent_conn = NULL;
    elog(NOTICE, "dblink_connect: connection error: %s", msg);
    result_text = DatumGetTextP(DirectFunctionCall1(textin,
                                                    CStringGetDatum("ERROR")));
    PG_RETURN_TEXT_P(result_text);
}
8<---------------------------------

It would be more complex if you don't want to use the persistent connection.
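
For the non-persistent case, one other possibility (untested, and not part of
dblink itself -- the name dblink_ping is just illustrative) would be a tiny
standalone C function that only reports whether a connection attempt to the
given conninfo succeeds, so your plpgsql code can test reachability before
calling the real dblink functions:

8<---------------------------------
/* hypothetical helper, not in dblink.c; sketch only */
#include "postgres.h"
#include "fmgr.h"
#include "libpq-fe.h"

PG_FUNCTION_INFO_V1(dblink_ping);

Datum
dblink_ping(PG_FUNCTION_ARGS)
{
    text   *conninfo_text = PG_GETARG_TEXT_P(0);
    char   *conninfo;
    PGconn *conn;
    bool    ok;

    /* convert the text argument to a C string */
    conninfo = DatumGetCString(DirectFunctionCall1(textout,
                               PointerGetDatum(conninfo_text)));

    /* throwaway (non-persistent) connection attempt */
    conn = PQconnectdb(conninfo);
    ok = (PQstatus(conn) == CONNECTION_OK);

    if (!ok)
        elog(NOTICE, "dblink_ping: connection error: %s",
             PQerrorMessage(conn));

    PQfinish(conn);
    PG_RETURN_BOOL(ok);
}
8<---------------------------------

You'd register it with CREATE FUNCTION the same way the other dblink
functions are registered.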

HTH,

Joe




Re: dblink question please

From: "Frankie Lam"
Yah!

That's exactly what I did already (I did it to all lines containing
`elog(ERROR...)'). :-D
But I'm still getting some error messages when I run 'extreme' tests
against my plpgsql functions, for example 'ExecMakerTableFunctionResult:
Invalid result from function returning tuple'.

I really have no idea how this message relates to the dblink functions.
Any ideas? Thanks so much.


Regards, Frankie.
"Frankie Lam" <frankie@ucr.com.hk> wrote in message
news:b2etmt$26en$1@news.hub.org...
> Hello,
>
> Does anyone know if there's a simple way that let dblink function calls in
a
> plpgsql function wouldn't raise exception, in case there's sth wrong.
> (e.g. cannot connect to the remote host ......)
>
> I want this because I want to have more controls over plpgsql function, as
> plpgsql only offers a very limited error exception handler.
>
> Regards, Frankie.
>
>




Re: dblink question please

From: Joe Conway
Frankie Lam wrote:
> Yah!
> 
> That's exactly what I did already (I did it to all lines containing
> `elog(ERROR...)'). :-D
> But I'm still getting some error messages when I run 'extreme' tests
> against my plpgsql functions, for example 'ExecMakerTableFunctionResult:
> Invalid result from function returning tuple'.
> 
> I really have no idea how this message relates to the dblink functions.
> Any ideas? Thanks so much.

Sounds like you didn't heed my warning. It is *probably* safe to make 
the elog(ERROR...) to elog(NOTICE...) change *in that one function*, 
dblink_connect(), and then use persistent connections, e.g.

select dblink_connect('dbname=blah ...');
select * from dblink('select f1 from foo') as (f1 text);

If instead you made the change within the dblink_record() function, and 
you weren't careful to add additional error handling, then the message 
you got is what I'd expect. You basically broke dblink for those scenarios.

Joe



Re: dblink question please

From: "Frankie Lam"
Thanks, Joe.

Now that I use only persistent connections inside my plpgsql functions,
the exception "ExecMakerTableFunctionResult: Invalid result from function
returning tuple" is gone.

The only problem left is that a dblink function (such as
dblink_exec('update foo set f1=true')) takes a very long time to return
(about 16 minutes) when I run an "extreme" test on it (like unplugging the
network cable while dblink_exec() is still working on the remote host).

It seems to me this is a matter of libpq (because connect_timeout doesn't
help in this case), but someone told me it has nothing to do with libpq,
and is probably a matter of the kernel and the TCP transport layer. (I
don't really understand this stuff.) Is this true? (If it is, then I have
to abort my project :-( )

Regards, Frankie.

"Joe Conway" <mail@joeconway.com> wrote in message
news:3E4B2BA5.3020409@joeconway.com...
> Frankie Lam wrote:
> > Yah!
> >
> > That's exactly what I did already(did this to all lines containing
> > `elog(ERROR...)'. :-D
> > But, I'm still experiencing some error messages when I'm doing 'extreme'
> > test to
> > plpgsql functions,
> > for example 'ExecMakerTableFunctionResult: Invalid result from function
> > returning tuple'.
> >
> > I really have no idea about what this message is related to the dblink
> > functions. Any idea
> > about it? Thanks so much.
>
> Sounds like you didn't heed my warning. It is *probably* safe to make
> the elog(ERROR...) to elog(NOTICE...) change *in that one function*,
> dblink_connect(), and then use persistent connections, e.g.
>
> select dblink_connect('dbname=blah ...');
> select * from dblink('select f1 from foo') as (f1 text);
>
> If instead you made the change within the dblink_record() function, and
> you weren't careful to add additional error handling, then the message
> you got is what I'd expect. You basically broke dblink for those
scenarios.
>
> Joe
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 6: Have you searched our list archives?
>
> http://archives.postgresql.org




Re: dblink question please

From: Joe Conway
Frankie Lam wrote:
> Now that I use only persistent connections inside my plpgsql functions,
> the exception "ExecMakerTableFunctionResult: Invalid result from function
> returning tuple" is gone.

Good.

> It seems to me this is a matter of libpq (because connect_timeout
> doesn't help in this case), but someone told me it has nothing to do
> with libpq, and is probably a matter of the kernel and the TCP transport
> layer. (I don't really understand this stuff.) Is this true? (If it is,
> then I have to abort my project :-( )
> 

I'm afraid the person who gave you that answer knows much better than I. 
Perhaps there is some kernel tcp parameter you can adjust? Or maybe 
dblink just isn't an appropriate solution given your requirements. You 
haven't described what exactly you are trying to do or what exactly your
requirements are, so it's a bit hard to help there.
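
If you do end up poking at the TCP layer, one socket-level option (untested,
Linux-specific, and just a sketch -- TCP_KEEPIDLE and friends aren't portable,
and the helper name is made up) is to turn on aggressive keepalives on the
socket behind the dblink connection, so a dead peer is noticed in seconds
rather than waiting out the kernel's defaults:

8<---------------------------------
/* sketch: tighten TCP keepalive on an existing libpq connection's socket */
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include "libpq-fe.h"

static void
set_fast_keepalive(PGconn *conn)
{
    int sock = PQsocket(conn);
    int on = 1;
    int idle = 10;      /* seconds of idle before the first probe */
    int interval = 5;   /* seconds between probes */
    int count = 3;      /* failed probes before the connection is declared dead */

    setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
}
8<---------------------------------

That only helps while the connection is sitting idle waiting for a reply,
though; it won't shorten the retransmission timeout if the kernel is still
trying to push unacknowledged data.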

Joe



Re: dblink question please

From: "Frankie Lam"
Many thanks Joe.

I think my problem is quite a typical one; let me try my best to describe the
project.

The project is to build a "twin-database" system, which requires data to be
replicated synchronously (though only ONE active master accepts requests) and
supports "fail-over" (if one database server crashes for some reason, then
later, when it comes back up, it should get synchronized with the current
active master).

The two servers' roles can change; slave -> active master and master ->
slave are both possible.

(I use dblink() functions to connect to the remote host.)
I've identified 8 possible cases for what might happen and how to cope with each:

***********************************************************************
Case 1: Both OK, clients connect to S1.
Update synchronously.

Case 2: S1 OK, S2 failed, clients connect to S1
Update S1 only, mark S2 as unavailable.

Case 3: S1 failed, S2 OK, clients connect to S1.
Clients time out and then connect to S2 (this has to be hard-coded in the
client code).
S2 tries to connect to S1 but fails,
so upgrade S2 to master and
mark S1 as unavailable.
Update S2 only.

Case 4: Both OK, clients connect to S2.
S2 checks S1's status in S2's database.
S1 is available and S2 is able to connect to S1,
so return an error (S2 is not currently the master).
Clients then connect to S1.

Case 5: (extends case 3) S1 failed, S2 OK, and then S1 comes back up.
At the beginning this is handled like case 3.
When S1 is up again, clients should continue to connect to S2.
If a new client connects to S1, S1 checks its current status: it thinks it is
master, but its status in S2 is slave.
Then downgrade S1 to slave and keep S2's database data unchanged.
S1 should then return an error (because it's not the master).
Finally clients should connect to S2.

Case 6: Case 5 + DB sync completed.
An external program exclusively locks (not even allowing others to read from
it) the status table of S2 (currently the master). Downgrade S2 to slave and
upgrade S1 to master in S2. Then upgrade S1 to master and downgrade S2 to
slave in S1. Unlock the table.
During the process, if clients connect to S1, return an error (because it's
the slave now).
If clients connect to S2, they wait for a timeout (because its status table
has been locked).
Once the table is unlocked, clients connecting to S2 get an error (it's not
the master now).

Case 7: Case 5 + DB sync in progress, S2 failed.
NO SOLUTION: S1 must not accept requests (because it has not been sync'ed).

Case 8: Case 5 + S2 failed, and then S1 comes back up.
NO SOLUTION: S1 doesn't know that it failed previously.
***********************************************************************

There are several identical tables stored on both servers; we manipulate
those tables to simulate transactions (storing every update/delete/insert
request from clients).

The database users mainly access the database by means of ADO over ODBC (they
are VB programs); a wrapper VB6 library has been written so that those VB
programs do not need to change their code much. (Clients become aware of the
presence of the slave server only when the current active master fails, so
that they can try to connect to the other one.)

Now the key problem is how I can simply move on when the active master fails
to connect to the slave (of course it will mark the relevant fields of the
status table; I just don't want to wait too long).

Thanks again for reading through this long and clumsy text of mine.

Regards, Frankie

"Joe Conway" <mail@joeconway.com> wrote in message
news:3E4B9003.6060907@joeconway.com...
> Frankie Lam wrote:
> > Now I use only persistent connection inside my PLPGSQL functions, the
> > EXCEPTION "ExecMakerTableFunctionResult: Invalid result from function
> >  returning tuple" has gone.
>
> Good.
>
> > It seems to me this is a matter of libPQ (because connect_timeout
> > doesn't work in the case), but someone told me this is nothing to do
> > with libPQ, and it's possible a matter of KERNEL and transport layer
> > of tcp protocol.(I don't really understand these stuff) Is this
> > true?(If this is true, then I have to abort my project :-( )
> >
>
> I'm afraid the person who gave you that answer knows much better than I.
> Perhaps there is some kernel tcp parameter you can adjust? Or maybe
> dblink just isn't an appropriate solution given your requirements. You
> haven't described what exactly you are trying to do, and what exactly
> are your requirements, so it's a bit hard to help there.
>
> Joe
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 2: you can get off all lists at once with the unregister command
>     (send "unregister YourEmailAddressHere" to majordomo@postgresql.org)




Re: dblink question please

From: Joe Conway
Frankie Lam wrote:
> Now the key problem is how I can simply move on when the active master
> fails to connect to the slave (of course it will mark the relevant fields
> of the status table; I just don't want to wait too long).
> 

The only thing I can think of is for you to write a C function 
specifically for this purpose using the libpq asynchronous query 
processing functions. See:
http://www.us.postgresql.org/users-lounge/docs/7.3/postgres/libpq-async.html
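
Untested, but the general shape would be something like this (the helper name
and the one-second poll are just for illustration):

8<---------------------------------
/* sketch: run a query with a client-side timeout using libpq's async API */
#include <time.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>
#include "libpq-fe.h"

/* Returns the PGresult, or NULL on timeout/error (caller must PQclear it). */
static PGresult *
exec_with_timeout(PGconn *conn, const char *sql, int timeout_secs)
{
    time_t deadline = time(NULL) + timeout_secs;

    if (!PQsendQuery(conn, sql))
        return NULL;                    /* could not even dispatch the query */

    while (PQisBusy(conn))
    {
        fd_set         rfds;
        struct timeval tv;
        int            sock = PQsocket(conn);

        if (time(NULL) >= deadline)
            return NULL;                /* give up; caller decides what next */

        FD_ZERO(&rfds);
        FD_SET(sock, &rfds);
        tv.tv_sec = 1;                  /* wake up at least once a second */
        tv.tv_usec = 0;

        if (select(sock + 1, &rfds, NULL, NULL, &tv) < 0)
            return NULL;                /* select() failed */

        if (FD_ISSET(sock, &rfds) && !PQconsumeInput(conn))
            return NULL;                /* connection went bad mid-read */
    }

    return PQgetResult(conn);
}
8<---------------------------------

Note that on a timeout the connection is still busy, so you'd want to
PQrequestCancel() and drain any remaining results (or just PQfinish and
reconnect) before reusing it.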

A few months ago Darko Prenosil sent me his functions to add to dblink 
(and he's been patiently waiting while I'm off busy with other stuff -- 
specifically PL/R -- sorry Darko!). They are basically wrappers around 
various libpq functions. Unfortunately, at a quick glance it appears he
didn't implement the asynchronous ones.

Perhaps you could start with his hacked copy of dblink and add the 
asynchronous functions yourself. I do still fully intend to update 
dblink with Darko's functions before 7.4 is released, so if you do the 
asynchronous functions, please send them in.

Darko -- is it OK for me to send your version of dblink to Frankie? Do 
you have an updated copy or perhaps even the asynch functions?

Joe