Zoolin Lin wrote:
> Hi,
>
> I have a database with a huge amount of data, so I'm trying to make it
> as fast as possible and minimize space.
>
> One thing I've done is join on a prepopulated date lookup table to
> avoid a bunch of rows with duplicate date columns. Without this I'd
> have about 2500 rows per hour with the exact same date w. timestamp
> in them.
>
> My question is: with Postgres, do I really gain anything by this, or
> should I just use the date w. timestamp column on the primary table
> and ditch the join on the date_id table?
>
> Primary table is all integers, like:
>
> date_id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num8
> ----------------------------------------------------------------
> The primary key is on the date_id through num6 columns.
What types are num1->8?
> date_id lookup table:
>
> This table is prepopulated with the date values that will be used.
>
> date_id | date w. timestamp
> ----------------------------
>       1 | 2007-2-15 midnight
>       2 | 2007-2-15 1 am
>       3 | 2007-2-15 2 am
> etc., 24 rows for each day
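For reference, I'm guessing the two tables look roughly like this (untested
sketch, table and column names made up):

CREATE TABLE date_lookup (
    date_id integer PRIMARY KEY,
    ts      timestamp NOT NULL  -- one row per hour: 2007-02-15 00:00, 01:00, ...
);

CREATE TABLE readings (
    date_id integer NOT NULL REFERENCES date_lookup (date_id),
    num1 integer, num2 integer, num3 integer, num4 integer,
    num5 integer, num6 integer, num7 integer, num8 integer,
    PRIMARY KEY (date_id, num1, num2, num3, num4, num5, num6)
);

-- Every read that needs the actual timestamp has to join:
SELECT d.ts, r.num1, r.num2
  FROM readings r
  JOIN date_lookup d USING (date_id);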
If you only want things accurate to an hour, you could lose the join and
just store the hour directly as an int: 2007021500, 2007021501, etc.
That should see you good to the year 2100 or so.
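Something like this (again just a sketch) converts both ways:

-- timestamp -> int key, e.g. 2007021500
SELECT to_char(timestamp '2007-02-15 00:00', 'YYYYMMDDHH24')::int;

-- int key -> timestamp, when you need it back
SELECT to_timestamp('2007021500', 'YYYYMMDDHH24');

A plain integer is 4 bytes against 8 for a timestamp, and you drop the
join entirely.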
--
Richard Huxton
Archonet Ltd