Re: Alternatives to very large tables with many performance-killing indices? - Mailing list pgsql-general

From Merlin Moncure
Subject Re: Alternatives to very large tables with many performance-killing indices?
Msg-id CAHyXU0xN1obQtpWgnXkfMHaAjt+6UvvGhMxEPBz58NboTXi0Rg@mail.gmail.com
In response to Alternatives to very large tables with many performance-killing indices?  (Wells Oliver <wellsoliver@gmail.com>)
List pgsql-general
On Thu, Aug 16, 2012 at 3:54 PM, Wells Oliver <wellsoliver@gmail.com> wrote:
> Hey folks, a question. We have a table that's getting large (6 million rows
> right now, but hey, no end in sight). It's wide-ish, too, 98 columns.
>
> The problem is that each of these columns needs to be searchable quickly at
> an application level, and I'm far too responsible an individual to put 98
> indexes on a table. Wondering what you folks have come across in terms of
> creative solutions that might be native to postgres. I can build something
> that indexes the data and caches it and runs separately from PG, but I
> wanted to exhaust all native options first.

Well, you could explore normalizing your table, particularly if many
of your 98 columns are NULL most of the time.  Another option would be
to move the sparse attributes into an hstore column and index it with
GIN/GiST -- especially if you need to filter on multiple columns at
once.  Organizing big data for fast searching is a complicated topic
and requires significant up-front thought about optimization.
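
A minimal sketch of the hstore approach (the table and key names here
are made up for illustration):

    CREATE EXTENSION hstore;

    -- keep the identity column relational; push the long tail of
    -- sparse attributes into a single hstore column
    CREATE TABLE measurements (
        id    bigserial PRIMARY KEY,
        attrs hstore NOT NULL DEFAULT ''::hstore
    );

    -- one GIN index serves containment (@>) and key-existence
    -- (?, ?&, ?|) queries over every key, instead of one btree
    -- index per column
    CREATE INDEX measurements_attrs_gin
        ON measurements USING gin (attrs);

    INSERT INTO measurements (attrs)
    VALUES ('color => red, size => large, region => west');

    -- filtering on several "columns" at once becomes a single
    -- indexable containment predicate
    SELECT id
    FROM measurements
    WHERE attrs @> 'color => red, region => west'::hstore;

One caveat: hstore values are untyped text, so @> is an exact-match
test; for numeric range filters you'd still want real typed columns
(or expression indexes) on the handful of keys that need them.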

merlin

