Re: [GENERAL] 50 MB Table

From: Paul Condon
Subject: Re: [GENERAL] 50 MB Table
Date:
Msg-id: 38C45283.570BAF60@quiknet.com
In response to: 50 MB Table (JB <jimbag@kw.igs.net>)
List: pgsql-general
JB wrote:

> I have a 50 MB +/- table in postgres. The data is normalized so there's
> not much I can do about the size. The tuples are about 512 bytes so
> there's a pile of 'em. I need to search on several fields, a couple
> in particular being text fields that need 'LIKE'. The problem is, the
> thing is way too slow. So, I was wondering: before I go hunting for
> some other solution, could anyone here point me to some ways to (hand)
> optimize the searching in postgres? Different indexes, hashing, LIKE?
> I'm not sure where to go with this.
>
> The basic criteria are:
>  - sizes of indexes, etc., are not an issue. There's lots of room on
> the box.
>  - the data is basically static, so a read-only setup (if there is
> such a thing) is fine.
>  - it needs to be FAST
>
> cheers
> jb
>

It sounds as if you have several different kinds of information encoded in
a single column using special words or letter combinations. This violates
the ideal that data items should be "atomic." Make a catalog of all the
things you want to be able to say about each tuple, and design a
relational schema in which each atomic assertion gets its own column
(attribute). Then you can create an index on each column, and you won't
have to use LIKE in your WHERE clauses.
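
To make that concrete, here is a rough sketch of the kind of
restructuring I mean. The table and column names are invented for
illustration; suppose the overloaded text column is "description" on a
table "parts":

-- Before: one overloaded text column, searched with LIKE:
--   SELECT * FROM parts WHERE description LIKE '%OBSOLETE%';
-- A pattern with a leading wildcard can never use an index, so
-- every row gets scanned.

-- After: each atomic fact gets its own column.
CREATE TABLE parts (
    part_id     integer PRIMARY KEY,
    name        text NOT NULL,
    category    text NOT NULL,      -- was a code word inside description
    is_obsolete boolean NOT NULL    -- was a keyword inside description
);

CREATE INDEX parts_category_idx ON parts (category);
CREATE INDEX parts_obsolete_idx ON parts (is_obsolete);

-- The LIKE search becomes an indexable equality test:
SELECT * FROM parts WHERE category = 'widget' AND NOT is_obsolete;

Since your data is essentially read-only, you can afford to index every
column you search on, and a VACUUM ANALYZE after loading will give the
planner good statistics to work with.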

Paul

