Re: Advice needed: query performance deteriorates by 2000% within 1 minute - Mailing list pgsql-performance

From Tom Lane
Subject Re: Advice needed: query performance deteriorates by 2000% within 1 minute
Date
Msg-id 1351442.1645719051@sss.pgh.pa.us
In response to Advice needed: query performance deteriorates by 2000% within 1 minute  (Peter Adlersburg <peter.adlersburg@gmail.com>)
Responses Re: Advice needed: query performance deteriorates by 2000% within 1 minute  (Peter Adlersburg <peter.adlersburg@gmail.com>)
List pgsql-performance
Peter Adlersburg <peter.adlersburg@gmail.com> writes:
>  Limit  (cost=0.00..804.97 rows=10 width=22) (actual
> time=23970.845..25588.432 rows=1 loops=1)
>    ->  Seq Scan on "order"  (cost=0.00..3863.86 rows=48 width=22) (actual
> time=23970.843..25588.429 rows=1 loops=1)
>          Filter: (jsonb_to_tsvector('english'::regconfig, content,
> '["all"]'::jsonb) @@ '''1.20709841'''::tsquery)
>          Rows Removed by Filter: 9652
>  Planning Time: 0.430 ms
>  Execution Time: 25588.448 ms

I think the expense here comes from re-executing jsonb_to_tsvector
a lot of times.  By default that's estimated as 100 times more expensive
than a simple function (such as addition), but these results make it
seem like that's an understatement.  You might try something like

alter function jsonb_to_tsvector(regconfig, jsonb, jsonb) cost 1000;

to further discourage the planner from picking this plan shape.
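
(If you want to see the current estimate first, the planner's per-call cost
lives in pg_proc, so a query along these lines should show both overloads;
untested sketch:

-- show the cost estimate the planner uses for each jsonb_to_tsvector overload
select oid::regprocedure as function, procost
from pg_proc
where proname = 'jsonb_to_tsvector';

The default for an ordinary SQL-callable function is procost = 1, and the
to_tsvector family ships with 100.)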

Possibly the cost estimate for ts_match_tq (the function underlying
this variant of @@) needs to be bumped up as well.
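
(To confirm which function backs the operator on your installation, the
operator catalog records it; something like this untested sketch should work,
and then the same ALTER FUNCTION ... COST trick applies to whatever it returns:

-- look up the function implementing tsvector @@ tsquery
select oprcode
from pg_operator
where oprname = '@@'
  and oprleft = 'tsvector'::regtype
  and oprright = 'tsquery'::regtype;
)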

(Bear in mind that pg_dump will not propagate such hacks on
system-defined objects, so you'll need a note to reapply
any such changes after dump/reload or pg_upgrade.)

            regards, tom lane


