The problem I saw was first highlighted by EAStress runs with PostgreSQL
on Solaris at 120-150 users. I have since replicated it with the smaller
internal benchmark that we use here to recreate such problems.
EAStress should be just fine for highlighting it. Just put pg_clog on
O_DIRECT or something similar so that all I/Os go to disk, which makes
the behavior easier to observe.
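
For reference, here is a minimal sketch of what forcing direct I/O looks
like at the syscall level (Linux-flavored, with a hypothetical file name;
on Solaris the same effect comes from a forcedirectio mount or
directio(fd, DIRECTIO_ON) rather than an O_DIRECT open flag):

/*
 * Minimal sketch: open a file with O_DIRECT so writes bypass the OS page
 * cache and hit disk immediately.  O_DIRECT requires the buffer, offset
 * and length to be suitably aligned (here 4096 bytes).  This is only an
 * illustration, not how PostgreSQL itself opens the pg_clog files.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const size_t pagesz = 8192;         /* one clog-sized page */
    void   *buf;
    int     fd;

    if (posix_memalign(&buf, 4096, pagesz) != 0)
        return 1;
    memset(buf, 0, pagesz);

    /* hypothetical test file name */
    fd = open("clog_test_page", O_CREAT | O_WRONLY | O_DIRECT, 0600);
    if (fd < 0)
    {
        perror("open");
        return 1;
    }
    if (write(fd, buf, pagesz) != (ssize_t) pagesz)
        perror("write");
    close(fd);
    free(buf);
    return 0;
}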
In the meantime I will try to get more information.
Regards,
Jignesh
Tom Lane wrote:
> Gregory Stark <stark@enterprisedb.com> writes:
>
>> Didn't we already go through this? He and Simon were pushing to bump up
>> NUM_CLOG_BUFFERS and you were arguing that the test wasn't representative and
>> that clog.c would have to be reengineered to scale well to larger
>> values.
>>
>
> AFAIR we never did get any clear explanation of what the test case is.
> I guess it must be write-mostly, else lazy XID assignment would have
> helped this by reducing the rate of XID consumption.
>
> It's still true that I'm leery of a large increase in the number of
> buffers without reengineering slru.c. That code was written on the
> assumption that there were few enough buffers that a linear search
> would be fine. I'd hold still for 16, or maybe even 32, but I dunno
> how much impact that will have for such a test case.
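
To illustrate the linear-search point, here is a rough sketch (simplified
and hypothetical names, not the real slru.c structures) of the kind of
scan involved: every page lookup walks all buffers, so lookup cost grows
with NUM_CLOG_BUFFERS.

#include <stdbool.h>
#include <stdio.h>

#define NUM_SLOTS 8                     /* stands in for NUM_CLOG_BUFFERS */

typedef struct
{
    int  page_number[NUM_SLOTS];        /* which clog page each slot holds */
    bool valid[NUM_SLOTS];              /* is the slot occupied? */
} SlruSketch;

/* Return the slot holding 'pageno', or -1 if it is not buffered. */
static int
slru_lookup(SlruSketch *shared, int pageno)
{
    int slot;

    for (slot = 0; slot < NUM_SLOTS; slot++)
    {
        if (shared->valid[slot] && shared->page_number[slot] == pageno)
            return slot;
    }
    return -1;                          /* caller must fetch from disk */
}

int
main(void)
{
    SlruSketch  shared = {{0}, {false}};

    shared.page_number[3] = 42;
    shared.valid[3] = true;
    printf("page 42 is in slot %d\n", slru_lookup(&shared, 42));
    return 0;
}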
>
> regards, tom lane
>