On Tue, 30 Mar 2004, Diogo Biazus wrote:
> Hi folks,
>
> I have a database using tsearch2 to index 300 000 documents.
> I've already optimized the queries, and the database is vacuumed on
> a daily basis.
> The stat function tells me that my index has approx. 460 000 unique words
> (I'm using a stemmer and a nice stopword list).
460 000 unique words is a lot! Have you looked at them? It is often
very useful to analyze what you actually indexed and to decide whether
you really want all of it.
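For example, you can rank lexemes by document frequency with tsearch2's
stat() function to spot junk words. A quick sketch ('documents' and 'fti'
are placeholder table/column names, substitute your own):

    -- Show the 50 lexemes that appear in the most documents.
    -- 'fti' is the tsvector column, 'documents' the indexed table.
    SELECT word, ndoc, nentry
      FROM stat('SELECT fti FROM documents')
     ORDER BY ndoc DESC
     LIMIT 50;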
I suggest you use an ispell dictionary and, if you index numbers
(check your statistics), the special dictionaries for integer and
decimal numbers:
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/dicts/README.intdict
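As a rough sketch of what that mapping looks like (assuming the stock
tsearch2 configuration tables and the 'default' configuration; names may
differ in your install):

    -- Route integer tokens to the intdict dictionary instead of
    -- indexing every distinct number as its own lexeme.
    UPDATE pg_ts_cfgmap
       SET dict_name = '{intdict}'
     WHERE ts_name = 'default'
       AND tok_alias IN ('int', 'uint');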
> The problem is performance, some queries take more than 10 seconds to
> execute, and I'm not sure if my bottleneck is memory or I/O.
> The server is an Athlon XP 2000, ATA133 HD, 1.5 GB RAM, running PostgreSQL
> 7.4.3 on FreeBSD 5.0 with lots of shared buffers and sort_mem...
>
> Does anyone have an idea of a more cost-efficient solution?
> How can I get better performance without having to invest an
> astronomically high amount of money?
>
> TIA,
>
>
Regards,
Oleg
_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83