> You shouldn't be opening a new connection just to update a tuple. As long
> as you don't transfer more than one thing at a time between your app and
> the backend, all should be fine.
You're right. What I think is happening is that the per-statement overhead,
combined with the DB performing a separate write operation for each update,
adds up to too much total cost.
> If your table has indices, you could try wrapping the updates within a
> transaction:
I tried this, and it did shave a bit of time off the operation, but not
very much. I also tried writing the data to a new, unindexed, empty table,
with the idea that if I could get it in there efficiently, I could
subsequently merge that table into the main table. I even simplified the
data I was posting down to four int4's -- but after all that, I still
didn't get very good performance out of it.
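For reference, the transaction wrapping looked roughly like this (table
and column names are invented for illustration; the point is that
autocommit is off, so the whole batch shares one commit):

    // java.sql.* assumed imported.
    Connection con = DriverManager.getConnection(url, user, pass);
    con.setAutoCommit(false);          // one transaction for the whole batch
    PreparedStatement ps = con.prepareStatement(
        "UPDATE samples SET a = ?, b = ?, c = ? WHERE id = ?");
    for (int i = 0; i < data.length; i++) {
        ps.setInt(1, data[i].a);
        ps.setInt(2, data[i].b);
        ps.setInt(3, data[i].c);
        ps.setInt(4, data[i].id);
        ps.executeUpdate();            // still one round trip per row
    }
    con.commit();                      // the writes hit disk together here
    ps.close();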
I'm going to try a few different DBs, and in the case of postgres I'm
going to upgrade to the latest version and try it on a few different
platforms.
> > For the time being my workaround is to cache all the data in the application
> > and have a separate thread post the updates. I am unhappy about this because
> > it defeats proper locking and makes for really dirty reads.
>
> Thread safety is on the cards for the driver, but it won't be there for
> 6.3 (I'm trying to get as much of the API implemented as possible).
I didn't realize it wasn't there. What I'll do is lock on some other object
so that only one thread can access the driver at a time. I'll let a
lower-priority background thread grab the lock and post my updates while
nothing else is going on, and I'll arrange for it to yield frequently.
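Roughly, I'm picturing something like this (UpdateQueue, Update, and
postUpdate() are hypothetical stand-ins for my caching layer, and con
and queue would really be fields):

    final Object dbLock = new Object();    // every driver call goes through this
    Thread writer = new Thread(new Runnable() {
        public void run() {
            for (;;) {
                Update u = queue.next();   // blocks until an update is cached
                synchronized (dbLock) {    // only one thread in the driver
                    postUpdate(con, u);
                }
                Thread.yield();            // don't starve foreground threads
            }
        }
    });
    writer.setPriority(Thread.MIN_PRIORITY);  // run when nothing else wants to
    writer.start();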
> As a non JDBC extension to the driver, I'm looking at providing a way to
> use postgresql's copy method for large updates to tables.
That would be great. Since there is no JDBC-standard way of doing it, I'd
be glad to have driver-dependent solutions. I would just rewrite that
section for each DB I wanted to support, and breathe a big sigh of relief
that it worked :)
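In the meantime, the backend-level command I'm hoping to reach looks like
the sketch below. Whether COPY can be pushed through the current driver as
an ordinary statement is my assumption, not something I've verified, and
the table and file names are invented:

    Statement st = con.createStatement();
    // COPY reads the file on the server side, so the path must be
    // visible to the backend, not to the JDBC client.
    st.executeUpdate("COPY staging FROM '/tmp/updates.txt'");
    st.close();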
My background-thread workaround improves apparent performance, but obviously
it can't keep up with a sustained data feed.
> Also, you say you are using the database to store Persistent data. Is this
> data that could be stored as a Serializable object? If so, you could have
> a few columns in the table hold key data for searching, and an oid to
> point to a Large Object holding the Serialized object. This now works for
> 6.3
I'm still thinking about this one. Right now I have a lot of small objects,
but I might be able to dream up a way to combine them into bigger objects
where this would make more sense. I'd have to give up the ability to
search on some of the fields, but that might still work. (On average my
objects are made up of four integers and a string, sometimes three
integers, sometimes two strings.)
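If I did combine them, I imagine it would look something like this:
serialize the object into a byte array and store it next to the few
columns I still need to search on. Table and column names are invented,
and using setBytes() for the object column is my assumption about how
the driver would take it:

    // java.io.* and java.sql.* assumed imported; rec is Serializable.
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    ObjectOutputStream out = new ObjectOutputStream(buf);
    out.writeObject(rec);                  // everything not searched on
    out.close();

    PreparedStatement ps = con.prepareStatement(
        "INSERT INTO records (key1, key2, body) VALUES (?, ?, ?)");
    ps.setInt(1, rec.key1);                // the columns I can still query
    ps.setInt(2, rec.key2);
    ps.setBytes(3, buf.toByteArray());     // the opaque serialized object
    ps.executeUpdate();
    ps.close();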
Justin