Shashi Gireddy wrote:
> I recently migrated from MySQL. The database size in MySQL was 1.4 GB
> (it is a static database). It generated a dump file (.sql) of 8 GB,
> and it took two days to import the whole thing into PostgreSQL. After
> all that, the response from PostgreSQL is a disaster: it took 40
> seconds to run select count(logrecno) from sf10001;, which returned
> 197569, and it takes forever to display the table. How can I optimize
> the database to get faster access to the data?
>
> Each table has 70 columns x 197569 rows (static data), and I have 40
> such tables. Everything is static.
>
> System configuration: P4 2.8 GHz, 512 MB RAM; OS: Windows XP;
> PostgreSQL version: 8.0.
First of all, you should run VACUUM FULL ANALYZE on all the tables
(http://www.postgresql.org/docs/8.0/interactive/sql-vacuum.html) - this
should solve the worst of the problem. You should also think about the
table structure, because PostgreSQL needs different indexes than MySQL.
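For example, something like this (sf10001 and logrecno are taken from
your mail - adjust the names to your schema, and index whatever columns
your WHERE clauses and joins actually use):

    -- reclaim the dead space left by the bulk load and refresh
    -- the planner's statistics
    VACUUM FULL ANALYZE sf10001;

    -- index the column you search or join on
    CREATE INDEX sf10001_logrecno_idx ON sf10001 (logrecno);

Note that an unqualified count() will still scan the whole table -
because of MVCC an index alone cannot answer it - but indexes will
speed up your WHERE clauses and joins enormously.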
A few months ago I had the same problem, but after vacuuming and
creating proper indexes everything worked like a charm. Believe me, you
can achieve the same speed - it is only a matter of good database
structure and environment settings
(http://www.postgresql.org/docs/8.0/interactive/runtime.html).
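For example, on your 512 MB machine a reasonable starting point in
postgresql.conf might be (example values only - 8.0 takes these as raw
numbers rather than memory units, and you should tune them for your
workload):

    shared_buffers = 10000          # ~80 MB (8 kB buffers)
    work_mem = 8192                 # 8 MB per sort/hash, in kB
    effective_cache_size = 32768    # ~256 MB (8 kB pages), OS cache hint

Remember that changing shared_buffers requires a server restart.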
Regards,
ML