Re: Qual evaluation cost estimates for GIN indexes - Mailing list pgsql-hackers

From Robert Haas
Subject Re: Qual evaluation cost estimates for GIN indexes
Date
Msg-id CA+TgmoayLN2vtuUztNTBvB0vdANHy=s7P-P+ROydrRC5FSatQg@mail.gmail.com
In response to Re: Qual evaluation cost estimates for GIN indexes  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Qual evaluation cost estimates for GIN indexes  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
On Thu, Feb 16, 2012 at 6:30 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I wrote:
>> BTW, an entirely different line of thought is "why on earth is @@ so
>> frickin expensive, when it's comparing already-processed tsvectors
>> with only a few entries to an already-processed tsquery with only one
>> entry??".  This test case suggests to me that there's something
>> unnecessarily slow in there, and a bit of micro-optimization effort
>> might be well repaid.
>
> Oh, scratch that: a bit of oprofiling shows that while the tsvectors
> aren't all that long, they are long enough to get compressed, and most
> of the runtime is going into pglz_decompress not @@ itself.  So this
> goes back to the known issue that the planner ought to try to account
> for detoasting costs.

This issue of detoasting costs comes up a lot, specifically in
reference to @@.  I wonder if we shouldn't try to apply some quick and
dirty hack in time for 9.2, like maybe charging random_page_cost for
every row or attribute we think will require detoasting.  That's obviously
going to be an underestimate in many if not most cases, but it would
probably still be an improvement over assuming that detoasting is
free.
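
Roughly, the arithmetic I'm imagining is just a flat per-detoast
surcharge folded into the qual cost.  To be clear, none of the names
below exist in the planner; this is only a standalone sketch of the
kind of charge I mean, not actual planner code:

/*
 * Standalone sketch of the proposed quick-and-dirty detoasting
 * surcharge: charge random_page_cost once per (row, attribute) pair
 * we expect to have to detoast.  Hypothetical names throughout.
 */
#include <stdio.h>

#define DEFAULT_RANDOM_PAGE_COST 4.0

static double
estimate_detoast_cost(double rows, int ntoasted_attrs,
                      double random_page_cost)
{
    /* deliberately a crude underestimate for large toasted values */
    return rows * ntoasted_attrs * random_page_cost;
}

int
main(void)
{
    /* e.g. 10000 rows with one tsvector column likely to be compressed */
    double extra = estimate_detoast_cost(10000.0, 1,
                                         DEFAULT_RANDOM_PAGE_COST);

    printf("extra cost charged for detoasting: %.1f\n", extra);
    return 0;
}

The constant is crude, but any nonzero per-detoast charge would at
least push the planner away from plans that force repeated detoasting.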

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

