Hello,
I am using a PostgreSQL database and I recently ran into some
problems.
I have a table of around two hundred thousand entries (each entry is
78 bytes), and a simple (select * from table) query takes a long time
to complete. Moreover, a (select * from table where column = (select
oid from another_table)) query takes several tens of minutes. An index
already exists on `column'.
The `another_table' has something like 200 entries, and `column'
takes its values from the OIDs of `another_table'.
The server where the database is installed is a sun4u SPARC,
UltraAX-i2, running SunOS 5.8.
Could you please tell me if there is any way to optimize queries on
such big tables?
At some later point the table will reach millions of entries, but with
such a high performance penalty it would be useless! The table is
updated regularly and cleaned (every entry is removed) on a daily
basis.
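The daily cleanup is essentially this (again with the placeholder
table name):

    -- once a day, every entry is removed
    DELETE FROM my_table;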
Thank you for any answer,
Ioannis