Re: When the Session ends in PGSQL? - Mailing list pgsql-general

From Durumdara
Subject Re: When the Session ends in PGSQL?
Date
Msg-id CAEcMXhnhFpadxa5B1H0AGV2D3fD39ORZVPmRHT2sDGRpGXrPhg@mail.gmail.com
In response to Re: When the Session ends in PGSQL?  (Bill Moran <wmoran@potentialtech.com>)
Responses Re: When the Session ends in PGSQL?  (Vincent de Phily <vincent.dephily@mobile-devices.fr>)
List pgsql-general
Dear Bill!

I don't want to start a flame war; I am only responding to your points.

It seems we are misunderstanding each other, so I want to clarify the
question and explain my viewpoint.

Preface:

1.) Databases need to release the resources held by sessions.
2.) There must be some boundary, or rule, for deciding when a session
is considered "finished".

Firebird uses a "DummyPacketInterval" to recognize dead sessions; EDB
uses a "Session Timeout" value for this.
An EDB session can stay alive even if the network connection is lost,
as long as the client sends a sign of life to the server before the
timeout expires.

To use PostgreSQL as well, I need to know its limitations and how the
connections should be handled and managed.

I am surprised that PostgreSQL ties this to live TCP connections,
because that may be out of our control.
If we have a timeout value, we can control how much time we have and
how long an outage is acceptable for a session.
For example: some of our sites use WiFi connections that sometimes drop
for just a short period. That is enough to break the TCP connection,
but thanks to a higher "Session Timeout" value our database sessions
stay alive and the applications do not have to be closed.

The other thing is the sign-of-life packet. We must do something
periodically to keep the connection alive; for example, every minute we
run some dummy statement on the server, like a trivial "SELECT".
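
Something like this, for example (only a sketch, assuming Python with
psycopg2; the connection details are made up):

import time
import psycopg2

# Hypothetical connection details.
conn = psycopg2.connect("host=dbserver dbname=mydb user=appuser")
conn.autocommit = True  # do not leave an idle transaction open between pings

def keepalive_ping(connection):
    # Run a trivial statement so the server sees traffic on this session.
    cur = connection.cursor()
    cur.execute("SELECT 1")
    cur.fetchone()
    cur.close()

while True:
    keepalive_ping(conn)
    time.sleep(60)  # once per minute, as in the example above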

My responses to your points:

2011/6/30 Bill Moran <wmoran@potentialtech.com>:
> In response to Durumdara <durumdara@gmail.com>:
>
>> Hi!
>>
>> 2011/6/30 Bill Moran <wmoran@potentialtech.com>:
>> > In response to Durumdara <durumdara@gmail.com>:
>> >
>> > Session ends when the connection is terminated.  I.e., a PostgreSQL
>> > session is synonymous with a TCP connection (or domain socket in some
>> > cases).
>>
>> In Windows OS what's this meaning?
>> I know there is a difference between Win's and Lin's TCP handling.
>
> There's differences between TCP handling in different distros of Linux,
> but it doesn't change the simple fact that all OSes will kill dead
> sockets eventually, and when the socket is closed, the PG session ends.

OK, I understand that. Thanks.
But two questions remain:
1.) How can I tune the OS's TCP/IP settings to allow a longer timeout?
2.) How can I make PostgreSQL use longer keepalive timeouts?
(tcp_keepalives_*?)
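
For question 2), here is roughly what I imagine (only a sketch, assuming
psycopg2 and a libpq new enough to accept client-side keepalive options;
host and database names are made up):

import psycopg2

# Client side: libpq keepalive options passed through the connection string.
conn = psycopg2.connect(
    "host=dbserver dbname=mydb user=appuser "
    "keepalives=1 keepalives_idle=300 keepalives_interval=30 keepalives_count=10"
)
conn.autocommit = True

# Server side: the same knobs exist as per-session settings
# (0 means "use the operating system default"; platform support varies,
# e.g. not every option is adjustable on a Windows server).
cur = conn.cursor()
cur.execute("SET tcp_keepalives_idle = 300")
cur.execute("SET tcp_keepalives_interval = 30")
cur.execute("SET tcp_keepalives_count = 10")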

>
>> > Such an option wouldn't make any sense to include, if you ask me.
>>
>> In EDB and FireBird we experienced that Timeout is good thing for
>> avoid the short net problems.
>> For example: many place they are using notebooks, and wifis.
>> If somebody get out the wifi area, lost the connection - but the wifi
>> CAN reactivate it when he/she is go again the needed place. And here
>> the DB service also can reactivate the Session in the background -
>> except the period exhaustion.
>
> All of that can be done with the correct settings at the TCP level as
> well.

Can you show me an example, or point me to some
descriptions/links/documentation, so I can do the "RTFM" part? :-)

>
>> > I
>> > mean, if I open a psql and start a transaction, then get interrupted or
>> > have to spend some time researching something else before I can finish,
>> > the last thing I want is to come back to psql and find that my session
>> > was terminated and my in-progress data changes lost.
>>
>> Maybe in PSQL, but not in client applications where users working, and
>> sometimes they are got some failures, like power, or blue screen, or
>> application dying.
>> They want to continue their's work, but when the locks/trs are
>> reamining they cannot do it.
>
> Incorrect.  If a Windows client bluescreens, the server will eventually
> notice that the socket is dead and clean it up.  If that's taking too
> long, then you need to review the TCP settings on your server.

If the PostgreSQL connection is socket based, then that is true.

>
> However, it sounds to me like your application is poorly designed.

No. It simply uses the connection model supported by those databases,
which is not the same as PostgreSQL's.

>If
> it's being used via unreliable connections, but requires that database
> locks be held for any length of time, you've got two warring design
> requirements, and one or the other is always going to suffer.

We don't hold locks or transactions for long periods, but we have had
situations where locks remained unreleased because a client froze.
Then we had to wait out the timeout period.
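
The only other way out I know of is for an administrator to clean up by
hand, roughly like this (a hedged sketch; table and connection details
are made up, pg_terminate_backend() needs superuser rights, and in real
life you would first check that the session really is dead):

import psycopg2

admin = psycopg2.connect("host=dbserver dbname=mydb user=postgres")
admin.autocommit = True
cur = admin.cursor()

# Find the backends still holding granted locks on the table in question.
cur.execute("""
    SELECT DISTINCT l.pid
    FROM pg_locks l
    JOIN pg_class c ON c.oid = l.relation
    WHERE c.relname = %s AND l.granted
""", ('emp_table',))

for (pid,) in cur.fetchall():
    # Terminating the backend rolls back its transaction and releases its locks.
    cur.execute("SELECT pg_terminate_backend(%s)", (pid,))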

>
>> > For your concern about dying applications, the OS will tear down the
>> > TCP/IP connection eventually, which will result in PG ending the
>> > session (rolling back and uncommitted transaction), so that approach
>> > doesn't really cause you problems there.
>>
>> Uhhhhhh... This sounds awfully.
>>
>> Is this meaning that I periodically lost my temp tables, locks,
>> transactions because of OS's network handling what is out of my
>> control?
>>
>> It sounds horrible for me. When this thing happens commonly?
>
> You are the first person I've had complain that this is a common
> scenario with database applications.

There always has to be a "FIRST TIME" and a "FIRST PERSON"... :-)

But this is not a common scenario, because many databases support
reconnecting after a lost TCP connection.


>  It sounds like your application
> was not designed properly to take into account the various demands of
> the environment.

Across more than 150 sites we cannot have a homogeneous environment,
so we are very dependent on the customers.

>
> Keeping things moving along happily in a situation where disconnects
> are frequent and unpredictable is damn difficult.  If you were to add
> a "session timeout" variable, you'd find that you haven't fixed the
> problem, only caused it to manifest in different ways.

I understand that, but what can you do at a WiFi-obsessed, multi-site
corporate customer where only a timeout can keep the database
connection alive?

>  Additionally,
> that's not a problem that it's in PostgreSQL's best interest to try to
> solve, as it's really closer to the application level (from a requirement
> perspective).

I don't think so. The programs work with many database objects that are
lost on disconnection. The only way to rebuild them all is to restart
the application.

>
>> I must sure in my Session still alive, and to keeps my important temp
>> tables, locks, etc in safely place...
>
> If "temp tables" are "important", then you're doing it wrong.

No, everything is in its place.
The application starts and connects, then builds many temp tables and
other database objects to hold the data that is important for this
session, and only for this session.
On disconnection that information is lost, so all we can do is restart
the client application.

>Using
> temp tables for any data that is not completely sacrificial is a poor
> design decision that is liable to haunt you in many ways.

I don't know how to do it without temp tables.

I could only do it by moving all the data to the client, which is a
very slow and painful operation, and doing that periodically would slow
the software down badly.

>  It destroys
> your ability to use connection pooling, replication, failover, etc in
> addition to the problem you are currently complaining about.

I don't think so. A connection pool must be prepared to drop everything
session-specific, like transaction state, temp tables, etc., and start
over "as a new connection".
Without this, the pooled sessions would accumulate more and more
objects, lock more and more resources, and eventually exhaust the
system.
A connection pool must reset the session.
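
In PostgreSQL terms I imagine that reset step looks roughly like this
(only a sketch, assuming psycopg2; DISCARD ALL drops temp tables,
advisory locks, prepared statements and session settings):

import psycopg2

conn = psycopg2.connect("host=dbserver dbname=mydb user=pooluser")  # made-up details
conn.autocommit = True  # DISCARD ALL cannot run inside a transaction block

def reset_pooled_connection(connection):
    # Abort anything still open, then wipe all session-local state
    # before the pool hands the connection to the next client.
    connection.rollback()
    cur = connection.cursor()
    cur.execute("DISCARD ALL")
    cur.close()

reset_pooled_connection(conn)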

>
> Locks are the same way, keeping DB locks in place for extended periods
> is almost always a bad idea,

We don't keep them longer than necessary.
But as you know, the "Devil never sleeps": someone can write buggy code
that misses the "close the transaction" step, and then we run into
problems very quickly if the DB server does no housekeeping on sessions.

> and not really the purpose of RDBMS-
> level locking.  Look into implementing advisory locking at the application
> level.

Yes, advisory locks are good for building mutexes that span
transactions. But it's a pity they don't accept user-defined strings...

Adv_Lock(What)

Adv_Lock('emp_table_record: 118')

It would be better than two integers.
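
As a workaround I suppose the string can be hashed down to the integer
keys, roughly like this (a sketch assuming psycopg2 and the undocumented
hashtext() function; different strings can collide on the same hash, so
it is not a perfect substitute):

import psycopg2

conn = psycopg2.connect("host=dbserver dbname=mydb user=appuser")  # made-up details
conn.autocommit = True
cur = conn.cursor()

def adv_lock(name):
    # hashtext() maps the string to a 32-bit integer, which fits the
    # two-integer form of pg_advisory_lock (namespace 0 chosen arbitrarily).
    cur.execute("SELECT pg_advisory_lock(0, hashtext(%s))", (name,))

def adv_unlock(name):
    cur.execute("SELECT pg_advisory_unlock(0, hashtext(%s))", (name,))

adv_lock('emp_table_record: 118')
# ... work on the record ...
adv_unlock('emp_table_record: 118')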

Thanks for reading, and for your help too:
    dd
