I wrote a data transformation script at work, and after seeing "with count -2017657667" (and similar) in my script's log I got a bit worried. I've seen elsewhere that folks just run a full-on count(*) afterwards to verify the counts, but that takes even MORE time. I was thinking it was a psycopg2 problem, but it seems there are issues with the internal counters in pg as well when tracking "large" changes.
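For what it's worth, the negative value is exactly what a 64-bit count truncated into a signed 32-bit counter looks like. Here's a minimal sketch; the "true" count below is a made-up value chosen only so the truncation reproduces the number from my log, and it assumes the counter wrapped exactly once:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Hypothetical true row count > 2^31, picked so the wrap below
         * reproduces the value seen in the log. */
        uint64_t true_count = 2277309629ULL;

        /* Truncate to a signed 32-bit counter, as an overflowing
         * internal/driver counter would. */
        int32_t wrapped = (int32_t) true_count;

        printf("with count %d\n", wrapped);   /* prints: with count -2017657667 */

        /* Assuming exactly one wrap, the original count can be
         * recovered by adding 2^32 back. */
        printf("recovered: %llu\n",
               (unsigned long long) (wrapped + (1ULL << 32)));
        return 0;
    }

If there was only one wrap, adding 2^32 back gives a plausible recovery; after multiple wraps the original count can't be reconstructed from the truncated value alone.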
Vik Fearing <vik.fearing@dalibo.com> writes:
> Without re-doing the work, my IRC logs show that I was bothered by this
> in src/backend/tcop/postgres.c:
>
>     max_rows = pq_getmsgint(&input_message, 4);
>
> I needed to change max_rows to int64 which meant I had to change
> pq_getmsgint to pq_getmsgint64 which made me a little worried.
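For context on why that one-line change snowballs: the two accessors consume different numbers of bytes from the message buffer. A simplified, self-contained illustration of the byte-width issue (getmsgint32 here is a stand-in, not the real pqformat.c code, and the buffer contents are hypothetical):

    #include <stdio.h>
    #include <stdint.h>

    /* Minimal stand-in for the backend's 4-byte network-order reader;
     * the real accessors live in src/backend/libpq/pqformat.c. */
    static uint32_t getmsgint32(const uint8_t **p)
    {
        uint32_t v = ((uint32_t)(*p)[0] << 24) | ((uint32_t)(*p)[1] << 16) |
                     ((uint32_t)(*p)[2] << 8)  |  (uint32_t)(*p)[3];
        *p += 4;
        return v;
    }

    int main(void)
    {
        /* Tail of a hypothetical Execute message: 4-byte row limit,
         * then whatever follows on the wire. */
        const uint8_t buf[] = {0x00, 0x00, 0x03, 0xE8,  /* limit = 1000 */
                               'X'};                    /* next byte of stream */
        const uint8_t *p = buf;

        uint32_t max_rows = getmsgint32(&p);
        printf("max_rows = %u, next byte = %c\n", max_rows, *p);

        /* If the reader pulled 8 bytes here (pq_getmsgint64-style) while
         * the client still sent 4, it would swallow the following bytes
         * and desync the protocol stream. */
        return 0;
    }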
As well you should be, because we are *not* doing that. That would be a guaranteed-incompatible protocol change. Fortunately, I don't see any functional need for widening the row-limit field in execute messages; how likely is it that someone wants to fetch exactly 3 billion rows? The practical use-cases for nonzero row limits generally involve fetching a bufferload worth of data at a time, so that the restriction to getting no more than INT_MAX rows at once is several orders of magnitude away from being a problem.
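Concretely, the row limit travels in a fixed 4-byte field of the Execute ('E') message, and a client that wants more rows simply repeats Execute on the same portal when the server answers PortalSuspended. A sketch of building that message (field layout per the v3 frontend/backend protocol docs; buffer handling simplified, no bounds checking):

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Build a v3-protocol Execute message: Byte1('E'), Int32 length,
     * String portal name, Int32 max_rows (0 means "no limit").
     * Returns total bytes written. */
    static size_t build_execute(uint8_t *out, const char *portal,
                                uint32_t max_rows)
    {
        size_t name_len = strlen(portal) + 1;          /* incl. NUL */
        uint32_t len = 4 + (uint32_t)name_len + 4;     /* self + name + limit */
        uint8_t *p = out;

        *p++ = 'E';                                    /* message type */
        *p++ = len >> 24; *p++ = len >> 16;            /* Int32 length, */
        *p++ = len >> 8;  *p++ = len;                  /* network byte order */
        memcpy(p, portal, name_len); p += name_len;    /* portal name */
        *p++ = max_rows >> 24; *p++ = max_rows >> 16;  /* 4-byte row limit: */
        *p++ = max_rows >> 8;  *p++ = max_rows;        /* this width is wired in */
        return (size_t)(p - out);
    }

    int main(void)
    {
        uint8_t msg[64];
        /* Fetch a bufferload (say 1000 rows) at a time; a driver loops,
         * re-sending Execute while the server keeps answering
         * PortalSuspended. */
        size_t n = build_execute(msg, "", 1000);
        printf("Execute message is %zu bytes\n", n);
        return 0;
    }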
The same goes for internal uses of row limits, which makes it questionable whether it's worth changing the width of ExecutorRun's count parameter, which is what I assume you were on about here. But in any case, if we did that, we'd not try to reflect it as far out as here, because the message format specs can't change.
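For reference, the internal change being weighed here would look roughly like the following; the exact signature and surrounding parameters differ across PostgreSQL versions, so read this as a sketch of the idea rather than a patch:

    #include <stdint.h>

    /* Stand-in types; the real ones come from the executor headers. */
    typedef struct QueryDesc QueryDesc;
    typedef int ScanDirection;

    /* Rough shape of the widening under discussion (a sketch, not a diff):
     *
     *   before:  void ExecutorRun(QueryDesc *qd, ScanDirection dir,
     *                             long count);
     *   after:   void ExecutorRun(QueryDesc *qd, ScanDirection dir,
     *                             uint64_t count);
     *
     * The count means "stop after this many tuples" (0 = run to
     * completion), so widening it only matters for callers that could
     * plausibly pass more than INT_MAX -- which, per the above, the
     * wire-protocol path never does. */
    void ExecutorRun(QueryDesc *queryDesc, ScanDirection direction,
                     uint64_t count);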