I'm migrating tables from Solaris to Linux. Aside from Red Hat putting the
directories in slightly different places, I expected performance to be a
close match.
The Sun version (compiled from source) handled 13 million rows with
(I know this is not efficient) SELECT * FROM tableName; but the Linux box
bumped me out of psql running the same SELECT * against a table with the
exact same structure and only 4 million rows.
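I suspect part of the problem is that psql (via libpq) buffers the whole
result set in client memory, so a bare SELECT * can die on the client no
matter how the server is tuned. Would paging through with a cursor be the
right fix? A rough sketch of what I mean (cursor name and batch size are
just my guesses):

    BEGIN;
    DECLARE big_scan CURSOR FOR SELECT * FROM tableName;
    -- repeat the FETCH until it returns no rows
    FETCH 10000 FROM big_scan;
    CLOSE big_scan;
    COMMIT;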
I'm also wondering if it's how the server is started: on the Sun I had to
launch it by hand with a huge nohup command, while on Linux it uses the
packaged /etc/rc.d/init.d script.
Linux Postgres gurus: I will have approximately 24 million rows per table
loaded each day. I don't care if an SQL call takes 4 hours to complete, as
long as it does complete. I need to keep 90 days of data, so I'm thinking
of loading into a new table (an exact duplicate of the structure) each day
for 90 days, then TRUNCATEing the oldest and starting over.
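Roughly like this, where the table and column names are just placeholders
for my real schema:

    -- one table per day; real column list omitted here
    CREATE TABLE log_day01 (ts timestamp, payload text);
    -- bulk-load the day's ~24 million rows
    COPY log_day01 FROM '/data/feeds/day01.dat';
    -- 90 days later, reclaim the oldest table's space instantly
    TRUNCATE TABLE log_day01;
    -- then reload it as the new "today" table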
Is there a better way? And how do I tune the stock Red Hat version? I
really don't want to have to rebuild it from source.
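From what I've read, the main knobs for a packaged build are in
$PGDATA/postgresql.conf (7.1 and later), plus the kernel shared-memory
limit; the numbers below are only my guesses for a box this size:

    # $PGDATA/postgresql.conf -- values are just a starting point
    shared_buffers = 4096      # in 8KB pages, so about 32MB
    sort_mem = 8192            # KB available per sort

    # and as root, raise SHMMAX so the bigger segment fits:
    echo 134217728 > /proc/sys/kernel/shmmax

Is that the right general approach, or am I missing something?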
P.S. I'm going from a 450MHz SPARC to a 750MHz Compaq with similar amounts
of real and swap memory.
-Mark