I once proposed a patch for 64-bit transaction IDs, but it adds overhead
to every tuple (wider XMIN and XMAX). With 64-bit transaction IDs,
pgbench loses roughly a couple of percent of performance. If a 64-bit
transaction ID is considered a reasonable fix, the patch has already
been posted, and anyone can apply it to later versions.
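
Just to illustrate where the per-tuple cost comes from (a sketch only;
these structs are simplified stand-ins, not the actual heap tuple header
or the patch I posted): every tuple carries XMIN and XMAX, so widening
the transaction ID from 32 to 64 bits doubles that part of each tuple
header:

/* Sketch: simplified stand-ins for the visibility fields in a tuple header. */
#include <stdint.h>
#include <stdio.h>

typedef uint32_t TransactionId;      /* current 32-bit xid */
typedef uint64_t TransactionId64;    /* hypothetical 64-bit xid */

struct Visibility32 { TransactionId   xmin, xmax; };
struct Visibility64 { TransactionId64 xmin, xmax; };

int main(void)
{
    printf("32-bit xids: %zu bytes of visibility data per tuple\n",
           sizeof(struct Visibility32));    /* 8 bytes  */
    printf("64-bit xids: %zu bytes of visibility data per tuple\n",
           sizeof(struct Visibility64));    /* 16 bytes */
    return 0;
}
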
Mark Woodward wrote:
>> Mark Woodward wrote:
>>> OK, here's my problem: I have a nature study where we have about 10
>>> video cameras taking 15 frames per second.
>>> For each frame we make a few transactions on a PostgreSQL database.
>> Maybe if you grouped multiple operations into bigger transactions, the I/O
>> savings could be enough to buy you the ability to vacuum once in a
>> while. Or consider buffering somehow -- save the data elsewhere, and
>> have some sort of daemon to put it into the database. This would let you
>> cope with the I/O increase during vacuum.
>
> The problem is sufficiently large that any minor modification can easily
> hide the problem for a predictable amount of time. My hope was that someone
> would have a real "long term" workaround.
>
--
Koichi Suzuki