On Thu, Jan 7, 2010 at 3:14 PM, Alvaro Herrera
<alvherre@commandprompt.com> wrote:
> Lefteris escribió:
>> Yes, I am reading the plan wrong! I thought that each row from the
>> plan reported the total time for the operation but it actually reports
>> the starting and ending point.
>>
>> So we all agree that the problem is on the scans:)
>>
>> So the next question is why changing shared memory buffers would fix
>> that? I only have one session with one connection; do I have many
>> reader workers or something?
>
> No amount of tinkering is going to change the fact that a seqscan is the
> fastest way to execute these queries. Even if you got it to be all in
> memory, it would still be much slower than the other systems which, I
> gather, are using columnar storage and thus are perfectly suited to this
> problem (unlike Postgres). The talk about "compression ratios" caught
> me by surprise until I realized it was columnar stuff. There's no way
> you can get such high ratios on a regular, row-oriented storage.
>
> --
> Alvaro Herrera http://www.CommandPrompt.com/
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support
>
I am aware of that and I totally agree. I would not expect a row store
to match the performance of a column store. I was just trying to
double-check that all settings are correct, because the usual gap
between column and row stores is seconds versus minutes, not seconds
versus almost an hour (as with queries Q2-Q8).
Everything you all said was very helpful and clear! The only part I
still disagree with/don't understand is the shared_buffers option :))
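For what it's worth, one way to see how much of a table actually sits in shared_buffers (and so whether raising the setting could even matter) is the contrib pg_buffercache module. This is just a sketch; the table name is a placeholder:

```sql
-- Show the current setting.
SHOW shared_buffers;

-- Count how many 8 kB buffers currently hold pages of a given table.
-- Requires the contrib/pg_buffercache module to be installed.
SELECT count(*) AS cached_pages
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
WHERE c.relname = 'mytable';  -- placeholder table name
```

If cached_pages stays far below the table's total page count (pg_class.relpages), the seqscan is mostly hitting disk or the OS cache rather than shared_buffers.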
Lefteris