Hi, all!
Ok, I've banged my head on this and would like to hear some opinions from the
list (I'm just short of trying it, though I don't have the hardware yet).
We have a long-running data-logging application, where we essentially receive
data records at varying speeds from a facility.
The data can be treated as arrays of floats, arrays of strings, or binary
dumps, where each element in an array represents one value.
Additionally, one packet may contain a varying number of 'multisampler'
values, where each value is itself an array.
Note that within one packet these arrays need not all be of the same length!
So, this gives me at least two tables for the data: one that can handle the
'scalar' data (float, string, binary), and one that needs an array for every
value (floats only).
All data will have two time stamps (facility and receive times), as data may
be sent off-line to the logger.
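To make sure I'm describing this clearly, here is a minimal sketch of the two data tables. All table and column names are made up for illustration; I've used SQLite here just because it is self-contained, so the multisampler arrays are normalized into one row per element (with a real array column, e.g. PostgreSQL's float8[], the element rows would collapse into one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One table for 'scalar' values (float, string, binary), one for the
# 'multisampler' arrays. Both carry the two time stamps.
cur.execute("""
    CREATE TABLE scalar_data (
        id           INTEGER PRIMARY KEY,
        facility_ts  TEXT NOT NULL,    -- time stamp set by the facility
        receive_ts   TEXT NOT NULL,    -- time stamp set by the logger
        value_float  REAL,
        value_text   TEXT,
        value_blob   BLOB
    )""")
cur.execute("""
    CREATE TABLE multisampler_data (
        id           INTEGER PRIMARY KEY,
        facility_ts  TEXT NOT NULL,
        receive_ts   TEXT NOT NULL,
        element_idx  INTEGER NOT NULL, -- position within the array
        value        REAL NOT NULL     -- arrays may differ in length
    )""")

cur.execute(
    "INSERT INTO scalar_data (facility_ts, receive_ts, value_float) "
    "VALUES ('2003-01-01 12:00:00', '2003-01-01 12:00:05', 3.14)")

# A three-element multisampler value, stored one element per row:
for i, v in enumerate([1.0, 2.0, 3.0]):
    cur.execute(
        "INSERT INTO multisampler_data "
        "(facility_ts, receive_ts, element_idx, value) VALUES "
        "('2003-01-01 12:00:00', '2003-01-01 12:00:05', ?, ?)", (i, v))
conn.commit()
```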
So, all this boils down to a simple question: is it better (in terms of
indices) to have a separate table for the time stamps and join the data to it
via a foreign-key id field, or to have both timestamps in each data table?
What if I would create more data tables?
Please keep in mind, that I expect to have multi-million records in each
table ;-)
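For the archives, the two alternatives I'm weighing, sketched side by side (again with SQLite and made-up names; this is just to show the shapes, not a benchmark). With the shared table, only the narrow timestamps table needs the timestamp indices and each data table needs just an index on the foreign key, at the price of a join on every time-range query; with inline timestamps, every data table repeats the columns and the indices:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    -- Alternative A: one shared table for the two time stamps;
    -- data rows reference it through a foreign key.
    CREATE TABLE timestamps (
        ts_id       INTEGER PRIMARY KEY,
        facility_ts TEXT NOT NULL,
        receive_ts  TEXT NOT NULL
    );
    CREATE TABLE scalar_data_a (
        id    INTEGER PRIMARY KEY,
        ts_id INTEGER NOT NULL REFERENCES timestamps(ts_id),
        value REAL
    );
    CREATE INDEX idx_ts_facility ON timestamps(facility_ts);
    CREATE INDEX idx_a_ts ON scalar_data_a(ts_id);

    -- Alternative B: both time stamps repeated in every data table;
    -- each data table then carries its own timestamp indices.
    CREATE TABLE scalar_data_b (
        id          INTEGER PRIMARY KEY,
        facility_ts TEXT NOT NULL,
        receive_ts  TEXT NOT NULL,
        value       REAL
    );
    CREATE INDEX idx_b_facility ON scalar_data_b(facility_ts);
""")

cur.execute("INSERT INTO timestamps VALUES "
            "(1, '2003-01-01 12:00:00', '2003-01-01 12:00:05')")
cur.execute("INSERT INTO scalar_data_a (ts_id, value) VALUES (1, 3.14)")

# Under alternative A, a time-range query always needs a join:
rows = cur.execute("""
    SELECT d.value FROM scalar_data_a d
    JOIN timestamps t ON t.ts_id = d.ts_id
    WHERE t.facility_ts >= '2003-01-01'
""").fetchall()
```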
Thx, Joerg
--
Leading SW developer - S.E.A GmbH
Mail: joerg.hessdoerfer@sea-gmbh.com
WWW: http://www.sea-gmbh.com