Thread: Serialized Access

From: "Phillip Mills"
I'm working on an application that uses EJB3 entities in JBoss, with Hibernate and a PostgreSQL database.  One of the entity tables needs consistent, synchronized updates to rows in an environment where telling the user that their operation failed and starting over is not an option.

Because it's the default, I've used EJB3's optimistic locking with a strategy of catching EJBExceptions and retrying my updates.  Since contention can be frequent, the overhead (and extra logic) for this seems like a waste.  I would like to try pessimistic locking and compare the results, but here's where my problem arises.  EJB documentation passes this off to the application server, which considers it an ORM problem.  Hibernate says it doesn't add any lock features beyond what JDBC and the database provide....

In the end, I need Java code in a stateless bean that causes serialized access to database rows that are under the control of an EntityManager, but the approach to doing that is eluding me.

So, I'm not entirely sure this is a PostgreSQL question, but if not perhaps someone can tell me which layer of this architecture *is* responsible.  :-)

Thanks in advance.

Re: Serialized Access

From: Craig Ringer
Phillip Mills wrote:
> I'm working on an application that uses EJB3 entities in JBoss, with
> Hibernate and a PostgreSQL database.  One of the entity tables needs
> consistent, synchronized updates to rows in an environment where telling the
> user that their operation failed and starting over is not an option.
>
> Because it's the default, I've used EJB3's optimistic locking with a
> strategy of catching EJBExceptions and retrying my updates.  Since
> contention can be frequent, the overhead (and extra logic) for this seems
> like a waste.  I would like to try pessimistic locking and compare the
> results, but here's where my problem arises.  EJB documentation passes this
> off to the application server, which considers it an ORM problem.  Hibernate
> says it doesn't add any lock features beyond what JDBC and the database
> provide....
>
> In the end, I need Java code in a stateless bean that causes serialized
> access to database rows that are under the control of an EntityManager, but
> the approach to doing that is eluding me.

You might want to look into advisory locking. If your locks don't need
to be longer than the life of an active EntityManager session then you
can probably just issue a native query through the EntityManager to
acquire the lock before doing anything more.

The main thing you need to be aware of is that advisory locks are not
transaction scoped, they're connection scoped. They are held until
explicitly released, or until connection close. You'll have to make sure
to release any locks when you catch persistence exceptions that leave
the connection usable. If the connection is broken you don't need to do
anything special as Pg will release the locks for you.
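As a concrete illustration, here is a minimal JDBC sketch of that pattern (the same statements could just as well be issued as native queries through the EntityManager). The two-int key-packing convention and the method names are my own assumptions for the sketch, not anything Hibernate or JBoss provides:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class AdvisoryLock {
    /**
     * pg_advisory_lock(bigint) takes a single 64-bit key; a common
     * convention is to pack an application-chosen "class" id into the
     * high 32 bits and a row id into the low 32 bits.
     */
    public static long advisoryKey(int classId, int rowId) {
        return ((long) classId << 32) | (rowId & 0xFFFFFFFFL);
    }

    /** Block until the lock is acquired on this connection. */
    public static void acquire(Connection conn, long key) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement("SELECT pg_advisory_lock(?)")) {
            ps.setLong(1, key);
            ps.execute();
        }
    }

    /** Release explicitly: advisory locks are connection-scoped, not transaction-scoped. */
    public static void release(Connection conn, long key) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement("SELECT pg_advisory_unlock(?)")) {
            ps.setLong(1, key);
            ps.execute();
        }
    }
}
```

The important part is wrapping the protected work in try { acquire(...) } finally { release(...) }, so the lock never leaks back into a connection pool on an exception.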

Advisory locking is specific to PostgreSQL and will not be portable to
other databases.

See:

http://www.postgresql.org/docs/current/static/explicit-locking.html#ADVISORY-LOCKS

I'm going to have a play and see if I can come up with a demo that uses
advisory locking through Hibernate. So far I'm only using it with direct
JDBC calls in some of the trickier bits of the app I'm working on, so I
haven't had cause to combine it with Hibernate yet.

--
Craig Ringer

Re: Serialized Access

From: "Scott Marlowe"
On Wed, Jun 25, 2008 at 9:21 AM, Phillip Mills <pmills@systemcore.ca> wrote:
> I'm working on an application that uses EJB3 entities in JBoss, with
> Hibernate and a PostgreSQL database.  One of the entity tables needs
> consistent, synchronized updates to rows in an environment where telling the
> user that their operation failed and starting over is not an option.
>
> Because it's the default, I've used EJB3's optimistic locking with a
> strategy of catching EJBExceptions and retrying my updates.  Since
> contention can be frequent, the overhead (and extra logic) for this seems
> like a waste.

Until you benchmark it for your app, you don't know how inefficient
it really is compared to pessimistic locking.

And what extra logic is needed to retry the failed transaction? Or do
you mean the logic in the app / JDBC, not in PostgreSQL?  On the
database side, the logic should pretty much be to just run the whole
transaction over again.  Since there's usually only one database but
multiple web servers, having the web servers work a little harder is
scalable, while doing anything that slows down the database will cost
everyone.

But I'm not really sure what exactly you're trying to do, so I don't
know quite what to advise here.  Definitely benchmark the serializable
transaction level with retry to see how it scales under your
contention conditions, though, just to have a baseline to compare
against.
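The retry-with-serializable baseline Scott describes can be sketched as a small helper. The helper name, the attempt limit, and the use of Callable are illustrative assumptions; the SQLState check is real, though: PostgreSQL reports serialization failures as 40001.

```java
import java.sql.SQLException;
import java.util.concurrent.Callable;

public class SerializableRetry {
    // PostgreSQL reports serialization failures with SQLState 40001.
    static final String SERIALIZATION_FAILURE = "40001";

    /**
     * Rerun the whole transaction body whenever it fails with a
     * serialization error, up to maxAttempts times.  Any other
     * SQLException is rethrown immediately.
     */
    public static <T> T withRetry(Callable<T> txBody, int maxAttempts) throws Exception {
        SQLException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return txBody.call();
            } catch (SQLException e) {
                if (!SERIALIZATION_FAILURE.equals(e.getSQLState())) {
                    throw e;   // not a serialization failure: don't retry
                }
                last = e;      // retry: rerun the whole transaction body
            }
        }
        throw last;
    }
}
```

In a real bean, txBody would open a transaction at SERIALIZABLE isolation, do all its updates, and commit; the loop simply reruns it from the top, which is exactly the "run the whole transaction over again" logic above.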

Re: Serialized Access

From: "Phillip Mills"
On Wed, Jun 25, 2008 at 10:55 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> Until you benchmark it for your app, you don't know how inefficient
> it really is compared to pessimistic locking.

Sure.  The question was more about finding the right approach/layer for implementing pessimistic locking so that comparisons *could* be made.

Re: Serialized Access

From: "Phillip Mills"
On Wed, Jun 25, 2008 at 10:21 PM, Craig Ringer <craig@postnewspapers.com.au> wrote:

> You might want to look into advisory locking. If your locks don't need
> to be longer than the life of an active EntityManager session then you
> can probably just issue a native query through the EntityManager to
> acquire the lock before doing anything more.

Thank you very much for this and the link.  (I'm much more an OOP programmer than a DB programmer.)  Too bad about the non-portability, but I suppose it had to be.

I am a little bemused that the only supported Java persistence strategy is to try the operation, and then react to failure by recreating the entire transaction.  At first I assumed I could catch the exception and then re-do the one table operation that had failed...nope, not in a transaction any more.

Re: Serialized Access

From: Craig Ringer
Phillip Mills wrote:
> On Wed, Jun 25, 2008 at 10:21 PM, Craig Ringer
> <craig@postnewspapers.com.au <mailto:craig@postnewspapers.com.au>> wrote:
>
>
>     You might want to look into advisory locking. If your locks don't need
>     to be longer than the life of an active EntityManager session then you
>     can probably just issue a native query through the EntityManager to
>     acquire the lock before doing anything more.
>
>
> Thank you very much for this and the link.  (I'm much more an OOP
> programmer than a DB programmer.)  Too bad about the non-portability,
> but I suppose it had to be.

Not necessarily. Your other option is to use SELECT ... FOR UPDATE to
explicitly lock the records you are interested in. This should be quite
portable - at least to any database you might actually want to use. I've
had no cause to test that in practice, though.

SELECT ... FOR UPDATE takes locks that are automatically released at
transaction commit or rollback. That's a rather significant advantage to
my mind.

Another option is a table-level lock using LOCK TABLE. You have a
variety of exclusion levels available, so you don't have to lock out
read-only transactions if you don't want to. LOCK TABLE probably isn't
very portable, at least in the details of its locking options and
behaviour. Like SELECT ... FOR UPDATE, LOCK TABLE locks are released at
transaction commit/rollback.
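Both transaction-scoped options might look like the following JDBC sketch (the same SQL could be issued as native queries through the EntityManager). The `account` table and the choice of EXCLUSIVE mode are illustrative assumptions:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class TxScopedLocks {
    // Row-level: lock just the rows you intend to update.
    static final String LOCK_ROW_SQL = "SELECT id FROM account WHERE id = ? FOR UPDATE";

    // Table-level: EXCLUSIVE mode still permits plain SELECTs but
    // blocks other writers for the rest of the transaction.
    static final String LOCK_TABLE_SQL = "LOCK TABLE account IN EXCLUSIVE MODE";

    /** Lock one row; the lock is held until the transaction commits or rolls back. */
    public static void lockRow(Connection conn, long id) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(LOCK_ROW_SQL)) {
            ps.setLong(1, id);
            ps.executeQuery();
        }
    }

    /** Lock the whole table for the rest of the transaction. */
    public static void lockTable(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute(LOCK_TABLE_SQL);
        }
    }
}
```

Unlike the advisory-lock case, there is no cleanup to get wrong here: commit or rollback releases both kinds of lock automatically.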

Whether advisory locking, table-level locking, or SELECT ... FOR UPDATE
is more appropriate depends a lot on the details of your app's needs and
how well each approach works with your tools.

With JPA 1, in all three cases the tool support is "not very good". I
think you can convince Hibernate to use SELECT ... FOR UPDATE for
pessimistic locking, so it should theoretically be possible with
Hibernate EntityManager as well, so long as you don't mind some
Hibernate-specific code. Table-level locks are just an easy native query
through the EntityManager, and advisory locks shouldn't be any harder.
You could also issue SELECT ... FOR UPDATE queries on the data of
interest through the native query mechanism if you can't or don't want
to use Hibernate's support.

In all three cases you need to watch Hibernate's session lifetimes very
carefully. With advisory locks you also need to keep a careful eye on
post-transaction cleanup, because if you end up returning a connection
that holds locks to the connection pool after a commit or rollback
you'll be in deadlock central (and a debugging nightmare).

It's hard to say what the best approach is with the limited amount of
information available. Maybe you could describe your problem in a little
more detail?

--
Craig Ringer