Greg Spiegelberg wrote:
> The data represents metrics at a point in time on a system for
> network, disk, memory, bus, controller, and so-on. Rx, Tx, errors,
> speed, and whatever else can be gathered.
>
> We arrived at this one 642 column table after testing the whole
> process from data gathering, methods of temporarily storing then
> loading to the database. Initially, 37+ tables were in use but
> the one big-un has saved us over 3.4 minutes.
I am sure you changed the design because those 3.4 minutes were significant to you.
But I suggest you go back to the 37-table design and see where the bottleneck is.
You can probably tune a join across 37 tables much better than you can optimize
the difference between two 642-column rows.
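For example, a normalized layout might look roughly like this (the table and
column names here are invented for illustration; your actual schema will differ):

    -- One narrow table per subsystem instead of one 642-column row.
    CREATE TABLE net_metrics (
        host_id    integer     NOT NULL,
        sample_ts  timestamptz NOT NULL,
        rx_bytes   bigint,
        tx_bytes   bigint,
        rx_errors  bigint,
        PRIMARY KEY (host_id, sample_ts)
    );

    CREATE TABLE disk_metrics (
        host_id    integer     NOT NULL,
        sample_ts  timestamptz NOT NULL,
        reads      bigint,
        writes     bigint,
        PRIMARY KEY (host_id, sample_ts)
    );

    -- ...and so on, one table per subsystem (memory, bus, controller, ...).
    -- Queries then join only the subsystems they actually need:
    SELECT n.rx_bytes, n.tx_bytes, d.reads, d.writes
      FROM net_metrics n
      JOIN disk_metrics d USING (host_id, sample_ts)
     WHERE n.host_id = 42
       AND n.sample_ts >= now() - interval '1 hour';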
Besides, such a large number of columns will cost you heavily in terms of
fragmentation across pages. The wasted space and the I/O thereof could be a
significant issue for a large number of rows.
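You can get a rough idea of the per-row footprint of the wide table straight
from the system catalogs; a quick sketch (it assumes the big table is named
"metrics", so substitute your own name):

    -- Approximate on-disk bytes per row: pages are 8 kB by default,
    -- and reltuples is the planner's row-count estimate.
    SELECT relname,
           relpages,
           reltuples,
           relpages::float8 * 8192 / NULLIF(reltuples, 0) AS bytes_per_row
      FROM pg_class
     WHERE relname = 'metrics';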
A 642-column table is a bad design, both theoretically and from the point of
view of postgresql's implementation. You did it because of a speed problem. Now
if we can resolve those speed problems, perhaps you could go back to the other
design.
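If most of those 3.4 minutes went into the load step, the 37-table design may
be salvageable by using COPY inside a single transaction instead of row-by-row
INSERTs; a sketch (the file paths and table names are invented):

    -- One COPY per subsystem table, all in one transaction, is
    -- usually far cheaper than individual INSERT statements.
    BEGIN;
    COPY net_metrics  FROM '/tmp/net_metrics.dat';
    COPY disk_metrics FROM '/tmp/disk_metrics.dat';
    -- ...and so on for the remaining tables...
    COMMIT;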
Is that feasible for you right now, or are you already too committed to the big table?
And of course, then it is a routine postgresql tuning exercise.. :-)
Shridhar