Re: PATCH: pgbench - merging transaction logs - Mailing list pgsql-hackers

From Tomas Vondra
Subject Re: PATCH: pgbench - merging transaction logs
Date
Msg-id 5544F8EC.6030908@2ndquadrant.com
In response to Re: PATCH: pgbench - merging transaction logs  (Fabien COELHO <coelho@cri.ensmp.fr>)
Responses Re: PATCH: pgbench - merging transaction logs
List pgsql-hackers
Hi,

On 05/02/15 15:30, Fabien COELHO wrote:
>
> Hello,
>
>>> The counters are updated when the transaction is finished anyway?
>>
>> Yes, but the thread does not know it's time to write the results until
>> it completes the first transaction after the interval ends ...
>>
>> Let's say the very first query in thread #1 takes a minute for some
>> reason, while the other threads process 100 transactions per second. So
>> before thread #1 can report 0 transactions for the first second, the
>> other threads already have results for 60 intervals.
>>
>> I think there's no way to make this work except for somehow tracking
>> timestamp of the last submitted results for each thread, and only
>> flushing results older than the minimum of the timestamps. But that's
>> not trivial - it certainly is more complicated than just writing into a
>> shared file descriptor.
>
> I agree that such an approach would be horrible for very limited
> value. However, I was suggesting that a transaction is counted only when
> it is finished, so the aggregated data is to be understood as referring
> to "finished transactions in the interval", and what is in progress
> would be counted in the next interval anyway.

That only works if every single transaction is immediately written into 
the shared buffer/file, but that would require acquiring a lock shared 
by all the threads. And that's not free - with many clients doing tiny 
transactions, for example, this might be a significant issue.

That's why I suggested that each client accumulates its own results and 
only submits them to the shared buffer once the interval is over. The 
submission, however, happens on the first transaction completed in the 
next interval - if that transaction takes a long time, the results are 
not submitted until it finishes.
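
A minimal sketch of what I have in mind - not the actual pgbench code, 
all the types and names below are made up:

#include <pthread.h>

#define AGG_INTERVAL    1.0         /* aggregation interval, in seconds */

typedef struct
{
    long        cnt;                /* transactions finished in this interval */
    double      sum_lat;            /* sum of their latencies */
    double      interval_end;       /* end of the current interval */
} thread_agg_t;

static pthread_mutex_t agg_lock = PTHREAD_MUTEX_INITIALIZER;
static long     shared_cnt = 0;
static double   shared_sum_lat = 0.0;

/* called by a client thread whenever one of its transactions finishes */
static void
transaction_finished(thread_agg_t *local, double now, double latency)
{
    if (now >= local->interval_end)
    {
        /*
         * This is the first transaction finished after the interval ended,
         * so submit the accumulated results.  If the transaction ran for a
         * long time, the previous interval only gets flushed now - that's
         * the delay discussed above.
         */
        pthread_mutex_lock(&agg_lock);
        shared_cnt += local->cnt;
        shared_sum_lat += local->sum_lat;
        pthread_mutex_unlock(&agg_lock);

        local->cnt = 0;
        local->sum_lat = 0.0;
        local->interval_end += AGG_INTERVAL;
    }

    /* purely local accumulation, no locking needed */
    local->cnt++;
    local->sum_lat += latency;
}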

It might also work in the other direction, though - a "writer" thread 
could collect the current results from all the client threads at the end 
of each interval.
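
In that case each thread's counters would need a (hopefully cheap) lock 
of their own, so the writer can grab them at the interval boundary. 
Roughly like this - again just a sketch, the names and the 
wait_until_interval_end() helper are made up:

#include <pthread.h>
#include <stdio.h>

/* per-thread counters, protected by a small per-thread lock */
typedef struct
{
    pthread_mutex_t lock;
    long            cnt;
    double          sum_lat;
} thread_counters_t;

static thread_counters_t *counters;     /* one entry per client thread */
static int      nthreads;
static FILE    *logfile;

extern void wait_until_interval_end(void);      /* made-up helper */

static void *
writer_thread(void *arg)
{
    (void) arg;

    for (;;)
    {
        long    cnt = 0;
        double  sum_lat = 0.0;

        wait_until_interval_end();

        for (int i = 0; i < nthreads; i++)
        {
            pthread_mutex_lock(&counters[i].lock);
            cnt += counters[i].cnt;
            sum_lat += counters[i].sum_lat;
            counters[i].cnt = 0;
            counters[i].sum_lat = 0.0;
            pthread_mutex_unlock(&counters[i].lock);
        }

        /* a stalled client simply contributes 0 to this interval */
        fprintf(logfile, "%ld %.6f\n", cnt, sum_lat);
    }

    return NULL;
}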

>> Merging results for each transaction would not have this issue, but
>> it would also use the lock much more frequently, and that seems
>> like a pretty bad idea (especially for the workloads with short
>> transactions that you suggested are a bad match for the detailed log -
>> this would make the aggregated log bad too).
>>
>> Also notice that all the threads will try to merge the data
>> (and thus acquire the lock) at almost the same time - this is
>> especially true for very short transactions. I would be surprised
>> if this did not cause issues on read-only workloads with large
>> numbers of threads.
>
> ISTM that the aggregated version should fare better than the
> detailed log, whatever is done: the key performance issue is that
> fprintf is slow, with the aggregated log the fprintf calls are
> infrequent, and only arithmetic remains in a critical section.

Probably.
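
For what it's worth, with per-transaction merging the critical section 
would boil down to a handful of additions and comparisons, something 
like this (reusing the made-up agg_lock / shared_cnt / shared_sum_lat 
names from the earlier sketch):

#include <float.h>

/* a few more shared counters, again made-up names */
static double   shared_sum_lat2 = 0.0;          /* sum of squares, for stddev */
static double   shared_min_lat = DBL_MAX;
static double   shared_max_lat = 0.0;

static void
merge_transaction(double latency)
{
    pthread_mutex_lock(&agg_lock);
    shared_cnt++;
    shared_sum_lat += latency;
    shared_sum_lat2 += latency * latency;
    if (latency < shared_min_lat)
        shared_min_lat = latency;
    if (latency > shared_max_lat)
        shared_max_lat = latency;
    pthread_mutex_unlock(&agg_lock);
}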

>>>>> (2) The feature would not be available for the thread-emulation with
>>>>> this approach, but I do not see this as a particular issue as I
>>>>> think that it is pretty much only dead code and a maintenance burden.
>>>>
>>>> I'm not willing to investigate that, nor am I willing to implement
>>>> another feature that works only sometimes (I've done that in the past,
>>>> and I find it a bad practice).
>>
>> [...]
>
> After the small discussion I triggered, I've submitted a patch to drop
> thread fork-emulation from pgbench.

OK, good.

>
>> [...]
>> Also, if the lock for the shared buffer is cheaper than the lock
>> required for fprintf, it may still be an improvement.
>
> Yep. "fprintf" does a lot of processing, so it is the main issue.

The question is whether the processing happens while holding the lock, 
though. I don't think that's the case.
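
One way to make sure the expensive formatting never happens under a 
shared lock is to format into a local buffer first and then do a single 
write on the shared file. Just a sketch with made-up names, not what the 
patch actually does:

#include <stdio.h>

static void
log_interval(FILE *logfile, long interval_start, long cnt, double sum_lat)
{
    char    buf[256];
    int     len;

    /* all the expensive formatting happens on a local buffer, lock-free */
    len = snprintf(buf, sizeof(buf), "%ld %ld %.6f\n",
                   interval_start, cnt, sum_lat);

    /* a single write of the finished line; only this needs serializing */
    fwrite(buf, 1, (size_t) len, logfile);
}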


--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


