I currently have a client with a database that must hold 125 million records, totalling about 250 fields.
The database has been normalized and indexed appropriately.
If any of you have worked with MySQL, you will have discovered that its indexing is quite limited: all of a table's indexes are stored in a single index file. Rebuilding an index actually creates a full copy of the original table, and once you get above two indexes on 125 million records it becomes extremely slow.
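For reference, the kind of statements involved look roughly like this (the table and column names here are made up, not from the actual schema):

```sql
-- Hypothetical example: each secondary index on a large table.
-- In MySQL (MyISAM), every one of these rebuilds the table's
-- single index file, which is where the slowdown shows up.
CREATE INDEX idx_customer_last_name ON customers (last_name);
CREATE INDEX idx_customer_region    ON customers (region_id);
CREATE INDEX idx_customer_signup    ON customers (signup_date);
```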
Should I even bother trying PostgreSQL to resolve this issue?
We can generate the same indexes in MS SQL and Oracle in a fraction of the time it takes MySQL.
Thanks,
Chris