On 5/21/12 10:41 AM, Tom Lane wrote:
> Josh Berkus <josh@agliodbs.com> writes:
>> Well, if it's not CPU costs, then something else is eating the time,
>> since I'm seeing per-tuple COUNT costs on indexes being 400% higher
>> than on heap.
>
> Well, I'm not: as I said, it looks like about 10% here. Perhaps you're
> testing a cassert-enabled build?
Oh, right, now I get you. It's not the per-tuple costs that matter,
it's the per-size (per-page) costs. Per-tuple costs are fairly
similar, as you said.
> If the index is too big to fit in RAM, you'd be looking at random
> fetches of the index pages in most cases (since logical ordering of the
> index pages is typically different from physical ordering), leading to
> it likely being a lot slower per page than a heapscan. Not sure this
> has anything to do with your test case though, since you said you'd
> sized the index to fit in RAM.
Right. So what I'm trying to figure out is why counting on an index
which fits in RAM (and I've confirmed via EXPLAIN (ANALYZE, BUFFERS)
that it is neither being heap-fetched nor read from disk) would take
25% as long as counting a table which is 80% on disk.
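
For reference, the check looks roughly like this (against the stock
pgbench_accounts table; any index-covered query would do):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM pgbench_accounts;

    -- In the output, "Buffers: shared hit=N" with no "read="
    -- component means all pages came from shared_buffers; any
    -- "read=" count would mean fetches from the OS cache or disk.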
I'll try comparing on-disk to on-disk speeds and in-memory to
in-memory speeds, plus some non-count tests and multicolumn covering
indexes (see the sketch below). I just need to generate more complex
test cases than I can get from pgbench first.
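
Roughly what I have in mind for the multicolumn case (table, column,
and index names are just placeholders):

    -- Synthetic table with a low-cardinality grouping column plus
    -- filler, so the covering index is much smaller than the heap.
    CREATE TABLE count_test AS
    SELECT i AS id,
           (i % 1000) AS group_id,
           repeat(md5(i::text), 4) AS filler
    FROM generate_series(1, 10000000) AS i;

    CREATE INDEX count_test_cover ON count_test (group_id, id);

    -- Vacuum to set the visibility map, so index-only scans can
    -- skip heap fetches.
    VACUUM ANALYZE count_test;

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM count_test WHERE group_id < 100;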
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com