Shohab Abdullah wrote:
>
> Dear,
> We are facing a performance tuning problem while using a PostgreSQL
> database over the network on a Linux OS.
> Our database consists of more than 500 tables averaging 10K records
> per table, with an average of 20 users accessing the database
> simultaneously over the network. Each table has indexes, and we are
> querying the database using Hibernate.
> The biggest problem is with insertion, updating, and fetching of
> records, i.e., the database performance is very slow. It takes a long
> time to respond in the above scenario.
> Please advise on tuning the database. I am attaching my
> *postgresql.conf* file for reference, showing our current configuration.
Have you changed _anything_ from the defaults? The defaults are set so
PG will run on as many installations as practical. They are not set for
performance - that is specific to your equipment, your data, and how you
need to handle the data. Assuming the record sizes aren't huge, that's
neither a very large data set nor a large number of users.
Look at these for starters:
http://www.varlena.com/GeneralBits/Tidbits/perf.html
http://www.varlena.com/GeneralBits/Tidbits/annotated_conf_e.html
You might try setting the logging parameters to log queries longer than
"x" (where x is in milliseconds - you will have to decide the
appropriate value for "too long") and start looking into those first.
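For example, the parameter for this is log_min_duration_statement; the
1000 below is just an illustrative threshold of one second:

    # in postgresql.conf: log any statement that runs longer than 1000 ms
    log_min_duration_statement = 1000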
Make sure that you are running "analyze" if it is not being run by
autovacuum.
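For example (the table name "orders" here is just a placeholder):

    ANALYZE;          -- refresh planner statistics for the whole database
    ANALYZE orders;   -- or just one table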
Use "EXPLAIN <your query>" to see how the query is being planned - as a
first-pass assume that on any reasonably sized table the words
"sequential scan" means "fix this". Note that you may have to cast
variables in a query to match the variable in an index in order for the
planner to figure out that it can use the index.
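Something along these lines - the table and column are made up for
illustration, and the cast matters mainly on older releases where an
integer literal won't match a bigint index:

    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
    -- if customer_id is bigint, casting the literal can let the planner
    -- use the index:
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42::bigint;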
Read the guidelines, then take an educated stab at some settings and see
how they work (one illustrative starting point is sketched below) - other
than turning off fsync, there's not much in postgresql.conf that will
put your data at risk.
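As one illustrative starting point only - the numbers below assume a
dedicated box with roughly 1 GB of RAM and PostgreSQL 8.2+ (which
accepts memory units), so adjust for your hardware:

    shared_buffers = 128MB        # PG's own buffer cache
    effective_cache_size = 512MB  # planner hint: total cache (OS + PG)
    work_mem = 8MB                # per-sort/per-hash memory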
Cheers,
Steve