Re: Alternatives to very large tables with many performance-killing indicies? - Mailing list pgsql-general

From: Jasen Betts
Subject: Re: Alternatives to very large tables with many performance-killing indicies?
Date:
Msg-id: k0nf8k$5vv$1@reversiblemaps.ath.cx
In response to: Alternatives to very large tables with many performance-killing indicies? (Wells Oliver <wellsoliver@gmail.com>)
List: pgsql-general
On 2012-08-16, Wells Oliver <wellsoliver@gmail.com> wrote:
>
> Hey folks, a question. We have a table that's getting large (6 million rows
> right now, but hey, no end in sight). It's wide-ish, too, 98 columns.
>
> The problem is that each of these columns needs to be searchable quickly at
> an application level, and I'm far too responsible an individual to put 98
> indexes on a table. Wondering what you folks have come across in terms of
> creative solutions that might be native to postgres. I can build something
> that indexes the data and caches it and runs separately from PG, but I
> wanted to exhaust all native options first.

get rid of some of the columns?
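
One way to act on that suggestion, sketched roughly (all table and column
names below are invented, not from the original post): keep only the hot
columns on the wide table and push the rest into a narrow key/value table,
so a single composite index covers every "column = value" search instead of
one index per column:

  -- wide table keeps only the columns that are always needed
  CREATE TABLE widget (
      widget_id   bigint PRIMARY KEY,
      name        text NOT NULL,
      created_at  timestamptz NOT NULL DEFAULT now()
  );

  -- the remaining attributes become rows instead of columns
  CREATE TABLE widget_attr (
      widget_id   bigint NOT NULL REFERENCES widget,
      attr_name   text   NOT NULL,   -- e.g. 'colour', 'weight_kg'
      attr_value  text   NOT NULL,
      PRIMARY KEY (widget_id, attr_name)
  );

  -- one index answers "which widgets have <attribute> = <value>?"
  CREATE INDEX widget_attr_lookup ON widget_attr (attr_name, attr_value);

  -- example search: widgets whose colour is blue
  SELECT w.*
  FROM   widget w
  JOIN   widget_attr a ON a.widget_id = w.widget_id
  WHERE  a.attr_name = 'colour'
  AND    a.attr_value = 'blue';

The trade-off is that values are stored as text and multi-attribute filters
need extra joins; keeping the columns together in an hstore or jsonb column
with a single GIN index is another native option.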



--
⚂⚃ 100% natural
