I have found that the current open source/free full-text index options out
there all waste a lot of disk space, and are simply not an option when
thousands of large documents have to be hosted on a fairly small
workgroup server. I am starting work on my own implementation that will
use a lot less space; I think rolling your own is the only option at this time.
- Andrew
On Tue, Feb 12, 2002 at 12:17:18PM -0800, Chris Gamache wrote:
> Computing the amount of disk space it would take to house the full text index
> on one of the fields in my database:
>
> 36 bytes: each row header (approximate)
> + 53 bytes Varchar(53)
> + 8 bytes: oid field
> + 4 bytes: pointer on page to tuple
> ----------------------------------------
> 101 bytes per row
>
> The data page size in PostgreSQL is 8192 bytes (8 KB), so:
>
> 8192 bytes per page
> ------------------- = 81 rows per database page (rounded down)
> 101 bytes per row
>
> 561062644 data rows
> -------------------- = 6926700 database pages (rounded up)
> 81 rows per page
>
> 6926700 database pages * 8192 bytes per page = 56,743,526,400 bytes (about 57 GB)
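The arithmetic above is easy to sanity-check with a short script. This is only a sketch using the approximate figures quoted in the email (36-byte row header, a varchar(53) assumed always full); note that the rows-per-page division has to round down, since a partial row does not fit on a page, while the page count rounds up:

```python
import math

# Approximate per-row cost, using the figures from the email above:
ROW_HEADER = 36      # approximate row header
VARCHAR_53 = 53      # assume the varchar(53) column is always full
OID_FIELD = 8
PAGE_POINTER = 4     # pointer on page to tuple
row_bytes = ROW_HEADER + VARCHAR_53 + OID_FIELD + PAGE_POINTER  # 101

PAGE_SIZE = 8192                        # PostgreSQL data page size
rows_per_page = PAGE_SIZE // row_bytes  # round DOWN: a partial row doesn't fit

total_rows = 561062644
pages = math.ceil(total_rows / rows_per_page)  # round UP: a partial page is still a page
total_bytes = pages * PAGE_SIZE

print(rows_per_page, pages, total_bytes)
```

With these assumptions the script gives 81 rows per page, 6926700 pages, and roughly 57 GB for the table alone.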
>
>
> Is my math right??? WOW! This thing is HUUUGE!
>
> Table "title_fti"
> Attribute | Type | Modifier
> -----------+-----------------------+----------
> string | character varying(53) |
> id | oid |
>
>
> When I index the oid, and the string,oid I'll get roughly another 75-100MB,
> yes?
>
> Any tips/tricks, or am I just going to have to add another 100MB to my server?
>
> CG
>
>