Re: Need advice for handling big data in postgres - Mailing list pgsql-general

From: Vincent Veyron
Subject: Re: Need advice for handling big data in postgres
Date:
Msg-id: 20150115144208.82c77ee4f6e03ae843a81687@wanadoo.fr
In response to: Need advice for handling big data in postgres  (Tobias Fielitz <tobias@streethawk.com>)
Responses: Re: Need advice for handling big data in postgres  (Jim Nasby <Jim.Nasby@BlueTreble.com>)
List: pgsql-general

On Wed, 14 Jan 2015 11:42:45 +1100
Tobias Fielitz <tobias@streethawk.com> wrote:
>
> OPTION 1 - PARTITIONING:
> For each query only a few columns are interesting and I could partition
> the table (as it was suggested on SO)
> by *created* and by *code*.
> There are roughly 10 different codes, and I would keep data for the last
> two months (partitioned by day). So I would end up having 10 * 60 = 600
> partitions.
> For every partition I could create indexes that are relevant to that
> partition (examples: *created*, *created_on_server* or *latitude* and
> *longitude*).
>
> OPTION 2 - MULTIPLE TABLES:
> I could create the tables myself: one for location log lines, one for
> comment log lines etc. and store them via python in the correct table
> (depending on *code*). Each of these tables would only have the columns
> and indexes needed.
>
> OUTCOME:
> I expect partitioning to be faster because Postgres selects the correct
> partition for me automatically. I can easily get rid of old data by
> dropping the corresponding partition. The downside of the partition
> approach is that all partitions inherit all columns of the master table
> which is unnecessary (and consumes disc space?).
>
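
For concreteness, a minimal sketch of what the quoted option 1 could look
like with the inheritance-based partitioning PostgreSQL offers; the column
names not mentioned above (id, payload) are my assumptions:

-- Parent table; children inherit all columns.
CREATE TABLE log_line (
    id                bigserial,
    code              integer          NOT NULL,
    created           timestamptz      NOT NULL,
    created_on_server timestamptz,
    latitude          double precision,
    longitude         double precision,
    payload           text
);

-- One child per (code, day); the CHECK constraints let the planner
-- exclude partitions that cannot match a query's WHERE clause.
CREATE TABLE log_line_code1_20150114 (
    CHECK (code = 1
           AND created >= '2015-01-14' AND created < '2015-01-15')
) INHERITS (log_line);

-- Only the indexes this particular partition needs:
CREATE INDEX ON log_line_code1_20150114 (created);
CREATE INDEX ON log_line_code1_20150114 (latitude, longitude);

-- Expiring old data is then a cheap DROP TABLE of the oldest children.

Rows still have to be routed into the right child, either with a BEFORE
INSERT trigger on the parent or by having the loader insert into the child
table directly.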

I gather from comments on this list that null fields have a very low overhead; see:

http://www.postgresql.org/message-id/87prx92lj9.fsf@oxford.xeocode.com
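
A quick, purely illustrative way to see that from psql, using the stock
pg_column_size() function:

-- A row value whose extra columns are all NULL is only marginally larger
-- than one with a single populated column: each NULL costs a bit in the
-- null bitmap rather than any stored data.
SELECT pg_column_size(ROW(1::int)) AS one_column,
       pg_column_size(ROW(1::int, NULL::text, NULL::timestamptz,
                          NULL::double precision,
                          NULL::double precision)) AS mostly_null;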

I would worry a lot more about the maintenance problems option 2 will
induce: if a code value changes or is added/deleted, your Python script
needs updating. SQL queries will also be a lot more complicated (union
selects on various tables) and harder to optimize. Maintaining the
coherence between the script and the tables will get harder and harder.
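
The difference in query shape is easy to see; the per-code table names
below are only my guesses at what option 2 would produce:

-- Option 1: one query against the parent; with the default
-- constraint_exclusion = partition setting, the planner skips children
-- whose CHECK constraints rule them out (the constant comparison makes
-- that possible at plan time).
SELECT count(*)
FROM   log_line
WHERE  created >= '2015-01-14';

-- Option 2: the same question needs every per-code table spelled out,
-- and the list has to be kept in sync with the Python routing script.
SELECT count(*) FROM (
    SELECT created FROM location_log WHERE created >= '2015-01-14'
    UNION ALL
    SELECT created FROM comment_log  WHERE created >= '2015-01-14'
    -- ... one branch per remaining code ...
) AS recent;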


--
Salutations, Vincent Veyron

https://libremen.com/
Legal case, contract and insurance claim management software

