Hi,
I'm running a web application using Zope that obtains all its data
from a PostgreSQL 7.4 database (Debian Sarge with the package
7.4.7-6sarge4) on an older Sparc machine, a Sun E250 server with
2GB of memory and two processors. Some time ago I did some
performance tuning and found out that
max_connections = 256
shared_buffers = 131072
sort_mem = 65536
helped for a certain application (which is not running on this
machine any more, but I left these parameters in
/etc/postgresql/postgresql.conf untouched).
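If I read the PostgreSQL 7.4 units correctly (shared_buffers is
counted in 8kB pages, sort_mem in kB), these values amount to
roughly the following (the arithmetic is mine, so please correct
me if I got the units wrong):

  shared_buffers = 131072  # 131072 x 8kB = 1GB, half of the 2GB RAM
  sort_mem = 65536         # 64MB per sort; with up to 256
                           # connections sorting at once this could
                           # in theory far exceed the available RAM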
My web application ran fine for years without any problem and the
performance was satisfying. Some months ago I added a table
containing 4,500,000 rows (all other tables in use are smaller by
orders of magnitude), so nothing very large, and this table is not
accessed directly by the web application (only through some
generated caching tables that are updated once a day). Some
functions and small tables were added as well, but the core has
been stable for several years.
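To give an idea, the daily update is roughly of the following
shape (the table and column names here are invented for
illustration, not the real ones):

  BEGIN;
  DELETE FROM cache_daily;          -- throw away yesterday's cache
  INSERT INTO cache_daily (cache_key, n)
      SELECT some_key, count(*)
        FROM big_table              -- the 4,500,000 row table
       GROUP BY some_key;
  COMMIT;
  VACUUM ANALYZE cache_daily;       -- 7.4 has no integrated
                                    -- autovacuum, so dead rows from
                                    -- the DELETE must be reclaimed
                                    -- by hand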
About two weeks ago the application became *drastically* slower,
and I urgently need to bring back the old performance. As I said,
I'm talking about functions accessing tables that have not grown
for years and should behave more or less the same as before.
I wonder whether adding tables and functions can influence other,
untouched parts, and how to find out what is slowing down things
that worked reliably and satisfyingly for years. My first attempt
was to switch back to the defaults of the current Debian
maintainer's /etc/postgresql/postgresql.conf, keeping only the
three parameters above as they were, but this did not change
anything.
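Of course I can post the values the server is actually using now,
checked from psql, for example (the database name is just a
placeholder):

  $ psql mydb
  mydb=> SELECT version();
  mydb=> SHOW shared_buffers;
  mydb=> SHOW sort_mem;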
I'm rather at a loss even about how to describe the problem
properly, and I hope there is at least enough information here for
you to ask me "the right questions", so that I can provide
whatever you need to track down the performance problem.
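If concrete timings help, I could for example run

  EXPLAIN ANALYZE SELECT * FROM some_slow_function('some argument');

against one of the slow functions (the function name is only a
placeholder) and post the output, or do the same for the plain
queries inside it.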
Kind regards and thanks for any help
Andreas.