Zoolin Lin wrote:
> thanks for your reply
>
>> Primary table is all integers like:
>>
>> date_id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num8
>> ----------------------------------------------------------------
>> primary key is on the date_id through num6 columns
>
>>> What types are num1->8?
>
> They are all integers
Hmm - not sure if you'd get any better packing if you could make some of
them int2 and put them next to each other. Need to test.
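
One quick way to test that, assuming PostgreSQL 8.1+ for pg_column_size()
(a rough sketch, not a definitive benchmark):

  -- Compare adjacent int2 columns against int4 columns.
  -- pg_column_size(ROW(...)) includes the tuple header, so only
  -- the difference between the two results matters.
  SELECT pg_column_size(ROW(1::int2, 2::int2, 3::int2, 4::int2)) AS four_int2,
         pg_column_size(ROW(1::int4, 2::int4, 3::int4, 4::int4)) AS four_int4;
  -- Expect four_int4 to be about 8 bytes larger (4 columns x 2 bytes saved).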
>> date_id | date w timestamp
>> ---------------------------
>>       1 | 2007-2-15 midnight
>>       2 | 2007-2-15 1 am
>>       3 | 2007-2-15 2 am
>> etc. for 24 hours each day
>
>>> If you only want things accurate to an hour, you could lose the
>>> join and just store it as an int: 2007021500, 2007021501 etc.
>
> Hmm yeah, I could. I think with the amount of data in the db, though,
> it behooves me to use one of the date types, even if via a lookup table.
You can always create it as a custom ZLDate type. All it really needs to
be is an int with a few casts.
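
A minimal sketch of that idea, assuming the int encodes YYYYMMDDHH as
suggested above (the zldate name and the helper functions are
hypothetical, not an existing API; PostgreSQL currently ignores
CREATE CAST on domains, so plain conversion functions are used instead):

  -- Lightweight named alias for the encoded int.
  CREATE DOMAIN zldate AS integer;

  -- Decode a 2007021500-style value into a timestamp.
  CREATE FUNCTION zldate_to_ts(zldate) RETURNS timestamp AS $$
    SELECT to_timestamp($1::text, 'YYYYMMDDHH24')::timestamp
  $$ LANGUAGE sql IMMUTABLE;

  -- Encode a timestamp back into the int form.
  CREATE FUNCTION ts_to_zldate(timestamp) RETURNS zldate AS $$
    SELECT to_char($1, 'YYYYMMDDHH24')::integer::zldate
  $$ LANGUAGE sql IMMUTABLE;

  SELECT zldate_to_ts(2007021501);  -- 2007-02-15 01:00:00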
> So I guess I'm just not sure if I'm really gaining anything by using
> an integer date id column and doing a join on a date lookup table,
> vs just making it a date w. timestamp column and having duplicate
> dates in that column.
>
> I would imagine internally that the date w. timestamp is stored as
> perhaps a time_t type plus some timezone information. I don't know
> if it takes that much more space, or whether there's a significant
> performance penalty in using it.
It's a double or an int64, I believe, so allow 8 bytes instead of 4 for
your int.
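
Easy enough to check, assuming pg_column_size() is available
(PostgreSQL 8.1+):

  -- Storage size of a timestamp vs a plain integer key.
  SELECT pg_column_size('2007-02-15 01:00'::timestamp) AS ts_bytes,   -- 8
         pg_column_size(2007021501::integer)           AS int_bytes;  -- 4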
> 2,500 rows per hour, with duplicate date columns, seems like it could
> add up though.
Well, let's see: 2500 * 24 * 365 = 21,900,000 rows, times 4 bytes extra
= roughly 83MB of additional storage over a year. Not sure it's worth
worrying about.
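
Or, as a quick sanity check in SQL:

  -- 21,900,000 rows x 4 extra bytes, in (binary) megabytes.
  SELECT 2500 * 24 * 365 * 4 / (1024 * 1024.0) AS extra_mb;  -- ~83.5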
--
Richard Huxton
Archonet Ltd