Ron St-Pierre wrote:
> We have a web based application with data that is updated daily. The
> biggest bottleneck occurs when we try to update
> one of the tables. This table contains 58,000 rows and 62 columns, and
> EVERY column is indexed. Every column is
> queryable by the users through the web interface, so we are
> reluctant to remove the indexes (recreating them would
> be time consuming too). The primary key is an INT and the rest of the
> columns are a mix of NUMERIC, TEXT, and DATEs.
> A typical update is:
> UPDATE dataTable SET field01=44.5, field02=44.5, field03='Bob',
> field04='foo', ... , field60='2004-08-30', field61='2004-08-29'
> WHERE id = 1234;
>
> Also of note is that the update is run about 10 times per day; we get
> blocks of data from 10 different sources, so we pre-process the
> data and then update the table. We also run VACUUM FULL ANALYZE on a
> nightly basis.
It now appears that VACUUM wasn't running properly. A manual VACUUM FULL
ANALYZE VERBOSE told us that approximately 275000 total pages were needed.
I increased max_fsm_pages to 300000, vacuumed, renamed the database and
re-created it from backup, and vacuumed numerous times, but the total fsm
pages needed remained in the 235000 -> 270000 range. This morning I deleted
the original (renamed) database, and a VACUUM FULL ANALYZE VERBOSE now says
that only about 9400 pages are needed.
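For reference, this is the line I changed in postgresql.conf (my
understanding is that fsm settings only take effect after a server
restart, since the free space map lives in shared memory):

    # postgresql.conf -- free space map sizing
    # set to at least the total pages reported by VACUUM FULL ANALYZE VERBOSE
    max_fsm_pages = 300000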
One question about redirecting VACUUM's output to a file, though. When I run:
psql -d imperial -c "vacuum full verbose analyze;" > vac.info
vac.info contains only the following line:
VACUUM
I've been unable to capture the VERBOSE output to a file. Any suggestions?
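(I wonder if the VERBOSE lines are going to stderr rather than stdout, in
which case a plain > would only catch the final VACUUM tag. If so,
redirecting stderr as well ought to capture them, e.g.

    psql -d imperial -c "vacuum full verbose analyze;" > vac.info 2>&1

assuming a Bourne-style shell, though I haven't verified this.)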
<snip>
>
Also, thanks for everyone's input on my original posting; I am
investigating some of the options mentioned to further improve
performance.
Ron