Re: New FSM patch - Mailing list pgsql-hackers

From: Heikki Linnakangas
Subject: Re: New FSM patch
Msg-id: 48CA392A.9000206@enterprisedb.com
In response to: Re: New FSM patch (Heikki Linnakangas <heikki.linnakangas@enterprisedb.com>)
Responses: Re: New FSM patch (Zdenek Kotala <Zdenek.Kotala@Sun.COM>)
           Re: New FSM patch (Tom Lane <tgl@sss.pgh.pa.us>)
           Re: New FSM patch (Zdenek Kotala <Zdenek.Kotala@Sun.COM>)
List: pgsql-hackers
Heikki Linnakangas wrote:
> I've also been working on a low level benchmark using a C user-defined 
> function that exercises just the FSM, showing the very raw CPU 
> performance vs. current implementation. More on that later, but ATM it 
> looks like the new implementation can be faster or slower than the 
> current one, depending on the table size.

Let me describe this test case first:
- The test program calls RecordAndGetPageWithFreeSpace in a tight loop, 
with random values; a sketch of such a function follows this list. There's 
no activity on the heap. In normal usage, the time spent in 
RecordAndGetPageWithFreeSpace is minuscule compared to the heap and index 
updates that cause RecordAndGetPageWithFreeSpace to be called.
- WAL was placed on a RAM drive. This is of course not how people set up 
their database servers, but the point of this test was to measure CPU 
speed and scalability. The impact of writing extra WAL is significant 
and needs to be taken into account, but that's a separate test and 
discussion, and needs to be considered in comparison to the WAL written 
by heap and index updates.
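
For reference, here is a minimal sketch of the kind of C user-defined 
function such a test could use. This is my reconstruction for illustration 
only, not the actual benchmark: the function name fsm_stress, its 
arguments, the lock level, and the random-value scheme are all assumptions.

/*
 * fsm_stress(relid oid, loops int4) - hammer the FSM of one relation.
 * Hypothetical sketch; only RecordAndGetPageWithFreeSpace is the real API
 * under test, everything else is illustrative scaffolding.
 */
#include "postgres.h"

#include "access/heapam.h"
#include "fmgr.h"
#include "storage/bufmgr.h"
#include "storage/freespace.h"
#include "storage/lock.h"
#include "utils/rel.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(fsm_stress);

Datum
fsm_stress(PG_FUNCTION_ARGS)
{
	Oid			relid = PG_GETARG_OID(0);
	int32		loops = PG_GETARG_INT32(1);
	Relation	rel;
	BlockNumber nblocks;
	int			i;

	rel = heap_open(relid, AccessShareLock);
	nblocks = Max(RelationGetNumberOfBlocks(rel), 1);

	for (i = 0; i < loops; i++)
	{
		/* random "old page" and random recorded/requested free space */
		BlockNumber oldPage = random() % nblocks;
		Size		oldSpaceAvail = random() % BLCKSZ;
		Size		spaceNeeded = random() % BLCKSZ;

		(void) RecordAndGetPageWithFreeSpace(rel, oldPage,
											 oldSpaceAvail, spaceNeeded);
	}

	heap_close(rel, AccessShareLock);

	/* return the number of calls made, so the caller can sanity-check */
	PG_RETURN_INT32(loops);
}

On the SQL level this would be declared with CREATE FUNCTION ... LANGUAGE C 
and then invoked from the custom pgbench script mentioned below.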

That said, the test results are pretty interesting.

I ran the test using a custom script with pgbench. I ran it with 
different table sizes, and with 1 or 2 clients, on CVS HEAD and a 
patched version. The unit is "thousands of RecordAndGetPageWithFreeSpace 
calls per second":

Table size      Patched               CVS HEAD
                1 client  2 clients   1 client  2 clients
8 kB                4.59       3.45      62.83      26.85
336 kB             13.85       6.43      41.8       16.55
3336 kB            14.96       6.3       22.45      10.55
33336 kB           14.85       6.56       5.44       4.08
333336 kB          14.48      11.04       0.79       0.74
3333336 kB         12.68      11.5        0.07       0.07
33333336 kB         7.67       5.37       0.05       0.05

The big surprise to me was that performance on CVS HEAD tanks as the 
table size increases. One possible explanation is that searches for X 
bytes of free space, for a very high X, will not find any matches, and 
the current FSM implementation ends up scanning through the whole FSM 
list for that relation.
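
To make that suspected failure mode concrete, here is a simplified sketch 
of the asymptotics. This is not the actual 8.3 freespace.c code, just an 
illustration under the assumption that the old FSM keeps a flat 
per-relation list of (page, free bytes) entries that a failed lookup has 
to walk in full.

/* Hypothetical illustration only: a lookup that no entry can satisfy has
 * no early exit, so its cost grows linearly with the number of entries,
 * i.e. with table size. */
typedef struct
{
	unsigned int page;		/* block number */
	unsigned int avail;		/* free bytes recorded for that block */
} FsmEntry;

static int
find_page_with_space(const FsmEntry *entries, int nentries,
					 unsigned int needed)
{
	int			i;

	for (i = 0; i < nentries; i++)
	{
		if (entries[i].avail >= needed)
			return (int) entries[i].page;	/* match: may stop early */
	}
	return -1;			/* very large request: every entry was examined */
}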

Another surprise was how badly both implementations scale. On CVS HEAD, 
I expected the performance to be roughly the same with 1 and 2 clients, 
because all access to the FSM is serialized on the FreeSpaceLock. But 
adding the 2nd client not only failed to help, it actually made 
performance much worse than with a single client. Context switching or 
cache line contention, perhaps? The new FSM implementation shows the 
same effect, which was an even bigger surprise. At table sizes > 32 MB, 
the FSM no longer fits on a single FSM page, so at the bigger table sizes 
I expected a nearly linear speedup from using multiple clients. That's 
not happening, and I don't know why. Going from 33 MB to 333 MB, the 
performance with 2 clients almost doubles, but it still doesn't exceed 
that with 1 client.

Going from 3 GB to 33 GB, the performance of the new implementation 
drops. I don't know why; I'll run some more tests with big table sizes 
to investigate that a bit more. The performance of the old implementation 
stays almost the same at that point, I believe because max_fsm_pages is 
exceeded there.

All in all, this isn't a very realistic test case, but it's interesting 
nevertheless. I'm happy with the performance of the new FSM on this 
test, as it's in the same ballpark as the old one, even though it's not 
quite what I expected.

-- 
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com

