Hello,
Tom Lane wrote:
>Antoine Bajolet <antoine.bajolet@free.fr> writes:
>
>>We are using postgresql in a search engine on an intranet handling
>>thousands of documents.
>>But we have a big problem when users use more than two search keys.
>
>I think you need to increase the statistics targets for your keywords
>table --- the estimates of numbers of matching rows are much too small:
>
What value do you think I could put in an ALTER TABLE ... SET STATISTICS
statement?
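I suppose it would be something like the following, on the keyword column
from the plan below (the target value of 100 is only a guess on my part):

  ALTER TABLE keywords ALTER COLUMN keyword SET STATISTICS 100;
  ANALYZE keywords;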
Also, the solution given by Simon Riggs works well.
<quote>
Recode your SQL with an IN subselect that retrieves all possible
keywords before it accesses the larger table.
</quote>
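In our case the rewritten query is shaped roughly like this (table and
column names are simplified here; the real query joins more tables):

  SELECT d.*
  FROM documents d
  WHERE d.keyword_id IN (
      SELECT k.id
      FROM keywords k
      WHERE k.keyword LIKE 'exploitation%'
  );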
But I will also try the old query after increasing the statistics target, and compare performance.
>
>>        ->  Index Scan using keyword_pattern_key on keywords k2  (cost=0.00..3.51 rows=1 width=4) (actual time=0.078..1.887 rows=75 loops=1)
>>              Index Cond: (((keyword)::text ~>=~ 'exploitation'::character varying) AND ((keyword)::text ~<~ 'exploitatioo'::character varying))
>>              Filter: ((keyword)::text ~~ 'exploitation%'::text)
>
>A factor-of-75 error is quite likely to mislead the planner into
>choosing a bad join plan.
>
>BTW, have you looked into using a real full-text-search engine (eg,
>tsearch2) instead of rolling your own like this?
>
It seems quite a good contrib, but...
The first version of this search engine was developed in 2000; neither
tsearch nor tsearch2 existed at that time.
Also, there is some development work built around this search engine
(a relevance algorithm, filtering by user rights, weighting keywords with
rules specific to each type of document, etc.), and adapting all of it to
work the same way with tsearch2 seems a bit heavy.
Finally, each indexed document is quite big, and the chosen method reduces
disk storage: 1 GB of text content comes down to ~100 MB of table space.
Best Regards,
Antoine Bajolet