I recently migrated from MySQL to PostgreSQL. The database is about 1.4 GB in MySQL
(it is a static database). The dump file (.sql) it generated was about 8 GB, and it
took two days to import the whole thing into Postgres. Even after all that, the
response from Postgres is a disaster: it takes 40 seconds to run

    select count(logrecno) from sf10001;

which returns 197569, and displaying the table takes forever. How can I optimize the
database so that I can expect faster access to the data?
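Would something along these lines be a sensible first step? This is only my guess
from the manual, not something I have tried yet:

    -- Refresh planner statistics and reclaim dead space left from the bulk load
    VACUUM ANALYZE sf10001;

    -- Show the actual plan and timing for the slow count
    EXPLAIN ANALYZE select count(logrecno) from sf10001;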
Each table has 70 columns x 197569 rows (static data), and I have 40 tables like
that. Everything is static.
System configuration: P4 2.8 GHz, 512 MB RAM; OS: Windows XP; PostgreSQL version: 8.0.
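Also, would postgresql.conf settings roughly like the following be a reasonable
starting point on a 512 MB machine? The numbers below are only rough guesses on my
part:

    # postgresql.conf (8.0 takes raw buffer / kB counts, not 'MB' units)
    shared_buffers = 10000           # ~80 MB of 8 kB buffers
    work_mem = 8192                  # 8 MB per sort/hash operation (kB)
    maintenance_work_mem = 65536     # 64 MB for VACUUM and CREATE INDEX (kB)
    effective_cache_size = 32768     # ~256 MB of OS cache, in 8 kB pages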
Thanks a million in advance,
Shashi.
--
Shashi Kiran Reddy. Gireddy,
Graduate Assistant,
CBER, University of Alabama.
http://www.cs.ua.edu/shashi
Home: 205-752-5137 Cell: 205-657-1438