I'd say indexing 2 TB of data would be very costly even for a
standalone solution (no relational database).
The ideal solution would be to use tsearch2 for current documents and a
standalone solution for archived documents. If these solutions share
common parsers, dictionaries, and ranking schemes, it would be easy to
combine the results of the two queries. We have a prototype of a
standalone solution - it's based on OpenFTS, which is already tsearch2
compatible.
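To illustrate the idea of combining results: if both backends use the same ranking scheme, their scores are directly comparable and merging is trivial. A minimal Python sketch (function and document names are purely illustrative, not from the actual prototype):

```python
# Hypothetical sketch: merge ranked hits from two full-text backends
# (e.g. tsearch2 over current documents, a standalone OpenFTS index
# over the archive). Assumes both return (doc_id, rank) pairs computed
# with the *same* ranking scheme, so the scores are comparable.

def merge_results(current, archive, limit=10):
    """Combine two ranked result lists, keeping the best rank per document."""
    best = {}
    for doc_id, rank in list(current) + list(archive):
        # A document may appear in both indexes; keep its highest rank.
        if doc_id not in best or rank > best[doc_id]:
            best[doc_id] = rank
    # Return the top hits, sorted by descending rank.
    return sorted(best.items(), key=lambda item: -item[1])[:limit]

current = [("doc1", 0.9), ("doc2", 0.4)]
archive = [("doc3", 0.7), ("doc2", 0.6)]
print(merge_results(current, archive))
# → [('doc1', 0.9), ('doc3', 0.7), ('doc2', 0.6)]
```

The key assumption is the shared ranking scheme; if the two systems scored documents differently, their results could not simply be interleaved like this.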
On Thu, 9 Sep 2004, Steve Atkins wrote:
> On Thu, Sep 09, 2004 at 07:56:20AM -0500, Vic Cekvenich wrote:
>
> > What would be performance of pgSQL text search vs MySQL vs Lucene (flat
> > file) for a 2 terabyte db?
> > thanks for any comments.
>
> My experience with tsearch2 has been that indexing even moderately
> large chunks of data is too slow to be feasible. Moderately large
> meaning tens of megabytes.
>
> Your mileage might well vary, but I wouldn't rely on postgresql full
> text search of that much data being functional, let alone fast enough
> to be useful. Test before making any decisions.
>
> If it's a static or moderately static text corpus you're probably
> better using a traditional FTS system anyway (tsearch2 has two
> advantages - tight integration with pgsql and good support for
> incremental indexing).
>
> Two terabytes is a lot of data. I'd suggest you do some research on
> FTS algorithms rather than just picking one of the off-the-shelf FTS
> systems without understanding what they actually do. "Managing
> Gigabytes" ISBN 1-55860-570-3 covers some approaches.
>
> Cheers,
> Steve
>
Regards,
Oleg
_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83