Michael Allman wrote:
> The C version takes an argument specifying the maximum number of xids to
> recover. The Java version does not.
That's at least partly because the C version makes the caller allocate
the array.
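For reference, the two signatures, roughly (the C one paraphrased from the
XA spec, the Java one from javax.transaction.xa.XAResource):

// C version (XA spec): the TM allocates xids[] and passes its capacity:
//     int xa_recover(XID *xids, long count, int rmid, long flags);
// Java version (JTA): the RM builds and returns the whole array itself:
Xid[] recover(int flag) throws XAException;
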
> Without this information, the Java
> version not only looks silly but doesn't make a lot of sense either.
It seems ok to me -- it puts the burden of selecting a suitable batch
size on the resource rather than the TM, but that's six of one, half a
dozen of the other. That also means the RM can generate the array at
whatever size is convenient, rather than having to buffer internally if
it retrieves xids in a larger block size than the TM selects.
It certainly doesn't make it more or less stateful, so I still don't
understand your original objection.
> For example, how many recovered xids should we return on the first call
> to recover()?
For our JDBC implementation? Set a fetchsize on your query to something
reasonable -- perhaps 500? -- and return up to that many Xids per call
until you hit the end of the resultset, then return empty arrays
thereafter until a new scan starts.
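Very roughly, something like this (a sketch only, not the actual driver
code; it assumes the gids in pg_prepared_xacts were written by this driver
and leaves out decoding them back into Xids):

import java.sql.*;
import java.util.*;

// Pulls gid strings from pg_prepared_xacts in batches; the real
// implementation would decode each gid back into the Xid it was built
// from when the branch was prepared.
class RecoveryScan {
    private static final int BATCH_SIZE = 500;
    private final Statement stmt;
    private final ResultSet rs;

    RecoveryScan(Connection conn) throws SQLException {
        stmt = conn.createStatement();
        stmt.setFetchSize(BATCH_SIZE);  // fetch in batches rather than buffering every row
        rs = stmt.executeQuery("SELECT gid FROM pg_prepared_xacts");
    }

    // Returns up to BATCH_SIZE gids; an empty array once the scan is exhausted.
    String[] nextBatch() throws SQLException {
        List<String> batch = new ArrayList<String>();
        while (batch.size() < BATCH_SIZE && rs.next()) {
            batch.add(rs.getString("gid"));
        }
        return batch.toArray(new String[batch.size()]);
    }

    void close() throws SQLException {
        rs.close();
        stmt.close();
    }
}
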
> Anyway, do you think my implementation of the recover() method violates
> the JTA spec?
The code in pgjdbcxa-20050721.jar appears to violate the spec, as you
completely ignore the flags argument. You need to track the recovery
scan state even if you decide to return all Xids in one array, because
subsequent calls shouldn't return those Xids again until a new scan is
started, per the API docs.
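The scan-state bookkeeping doesn't need to be much more than this (again a
sketch, building on the helper above; decodeXid() stands in for whatever
gid encoding the driver actually uses, and error handling is simplified):

import java.sql.*;
import javax.transaction.xa.*;

class RecoverySupport {
    private final Connection connection;
    private RecoveryScan currentScan;   // non-null while a recovery scan is open

    RecoverySupport(Connection connection) { this.connection = connection; }

    public Xid[] recover(int flag) throws XAException {
        boolean start = (flag & XAResource.TMSTARTRSCAN) != 0;
        boolean end   = (flag & XAResource.TMENDRSCAN) != 0;
        if ((flag & ~(XAResource.TMSTARTRSCAN | XAResource.TMENDRSCAN)) != 0)
            throw new XAException(XAException.XAER_INVAL);  // unrecognized flag bits
        if (!start && currentScan == null)
            return new Xid[0];   // no scan in progress: nothing further to report
        try {
            if (start) {
                if (currentScan != null) currentScan.close();  // restart any old scan
                currentScan = new RecoveryScan(connection);
            }
            String[] gids = currentScan.nextBatch();
            Xid[] xids = new Xid[gids.length];
            for (int i = 0; i < gids.length; i++)
                xids[i] = decodeXid(gids[i]);
            if (end || xids.length == 0) {   // scan ended explicitly or exhausted
                currentScan.close();
                currentScan = null;
            }
            return xids;
        } catch (SQLException e) {
            throw new XAException(XAException.XAER_RMERR);
        }
    }

    // Hypothetical: reverse whatever encoding stored the Xid as a gid.
    private Xid decodeXid(String gid) { throw new UnsupportedOperationException(); }
}
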
-O