Richard,
Thank you.
> Try "SELECT count(*) FROM pg_shdepend".
This query returns 3625 and takes 35 seconds to run.
> If it's not a million rows, then the table is bloated. Try (as postgres
> or some other db superuser) "vacuum full pg_shdepend" and a "reindex
> pg_shdepend".
vacuum full verbose pg_shdepend
INFO: vacuuming "pg_catalog.pg_shdepend"
INFO: "pg_shdepend": found 16103561 removable, 3629 nonremovable row
versions in 131425 pages
DETAIL: 0 dead row versions cannot be removed yet.
Nonremovable row versions range from 49 to 49 bytes long.
There were 0 unused item pointers.
Total free space (including removable row versions) is 1009387632 bytes.
131363 pages are or will become empty, including 0 at the end of the table.
131425 pages containing 1009387632 free bytes are potential move
destinations.
CPU 2.12s/1.69u sec elapsed 52.66 sec.
INFO: index "pg_shdepend_depender_index" now contains 3629 row versions in
101794 pages
DETAIL: 16103561 index row versions were removed.
101311 index pages have been deleted, 20000 are currently reusable.
CPU 20.12s/14.52u sec elapsed 220.66 sec.
After the command had been running for 400 seconds, I got phone calls that the
server was not responding to other clients, so I was forced to cancel the
"vacuum full verbose pg_shdepend" command.
How can I run it so that other users can use the database at the same time?
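If I understand the docs correctly, a plain VACUUM (without FULL) takes only a lighter lock, so other sessions can keep working while it runs; it reclaims space for reuse but does not shrink the file. A sketch:

```sql
-- Plain VACUUM does not take an ACCESS EXCLUSIVE lock, so other
-- sessions can keep reading and writing the table while it runs.
-- It marks the dead row versions reusable but does not return the
-- space to the operating system the way VACUUM FULL does.
VACUUM VERBOSE pg_shdepend;
```

Would running this regularly be enough to keep the bloat from growing back, even if the file stays large after the one-time VACUUM FULL?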
> If it is a million rows, you'll need to find out why. Do you have a lot
> of temporary tables that aren't being dropped or something similar?
The application creates temporary tables in many places. Every sales operation
probably creates some temporary tables.
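If it helps, one change on the application side might be to create each temporary table once per session with ON COMMIT DELETE ROWS and reuse it, instead of creating and dropping it in every transaction, so fewer catalog (and pg_shdepend) entries are churned. A sketch with a made-up table name:

```sql
-- Hypothetical example: "pos_cart" stands in for one of the
-- application's temporary tables.
CREATE TEMP TABLE pos_cart (
    item_id integer,
    qty     integer
) ON COMMIT DELETE ROWS;   -- rows are cleared at commit, table survives

-- Later transactions in the same session reuse the table instead of
-- re-creating it, so no new catalog entries are made each time.
INSERT INTO pos_cart VALUES (1, 2);
```

Would that kind of change be expected to stop pg_shdepend from bloating again?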
Should I change something in the configuration, change the application (a
single POS application is the only client of this database), or is the only
solution to manually run

vacuum full pg_shdepend
reindex table pg_shdepend

periodically?
How can I run vacuum full pg_shdepend automatically so that other users can
work at the same time?
Hopefully this table's size does not affect query speed.
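In case it is useful, I could schedule a periodic plain vacuum outside the application, for example from cron using the vacuumdb client (database name and schedule below are only assumptions):

```
# Hypothetical crontab entry for the postgres OS user: vacuum the
# table every night at 03:00; "mydb" stands in for the real database.
0 3 * * *  vacuumdb --quiet --table=pg_catalog.pg_shdepend mydb
```

Would this be a reasonable replacement for running vacuum full by hand, given that plain vacuum does not lock out the other clients?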
Andrus.