Shadkam Islam wrote:
> Hi All,
>
> We have a table whose data we need to bucketize and show. This is a
> continuously growing table (archival is a way to trim it to size).
> We are facing two issues here:
>
> 1. When the records in the table are in the range of 10K, it works fine
> for some time after starting the postgres server. But as time passes, the
> entire machine becomes slower and slower - to the extent that we need to
> go for a restart - though taskmgr does not show any process consuming an
> extraordinary amount of CPU / memory. After a restart of the postgres
> server, things come back to normal. What may be going wrong here?
Do you have any connections sitting "idle in transaction"?
Are you happy that vacuuming is happening?
Are you happy that the configuration values are sensible for your hardware?
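For example, something along these lines against pg_stat_activity should
show any idle-in-transaction backends (column names vary with version -
older releases report it via current_query, newer ones have a separate
state column, and you need command-string stats enabled):

  SELECT procpid, usename, query_start, current_query
  FROM pg_stat_activity
  WHERE current_query = '<IDLE> in transaction';

A "VACUUM VERBOSE ANALYSE tablename" will also tell you how much dead space
each table is carrying, which is a good clue as to whether vacuuming is
keeping up.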
> 2. When the records cross 200K, the queries (even "select count(*) from
> _TABLE_") start taking minutes, and sometimes do not return at all. We
> were previously using MySQL, and at least this query used to work OK
> there. [Our queries are of the form "select sum(col1), sum(col2),
> count(col3) ... where .... group by ..."]. Any suggestions ...
Well, "SELECT count(*) FROM TABLE" *is* slow in PG, because it needs to
check visibility of each row and hence scan the table. Shouldn't be
minutes though, not unless you've turned vacuuming off. A table of
200,000 rows isn't particularly large.
Can you give an example of a particular query that's too slow and the
EXPLAIN ANALYSE to go with it? Oh, and the schema and sizes for the
tables involved if possible.
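Something like the following is what I mean - the table and column names
here are just made up to stand in for yours:

  EXPLAIN ANALYSE
  SELECT sum(col1), sum(col2), count(col3)
  FROM your_table
  WHERE col4 > 100
  GROUP BY col5;

Paste the full output, including the actual row counts and times.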
> Below are the tuning parameter changes that we made with help from the
> internet:
Just "the internet" in general, or any particular pages?
> We are starting postgres with the options [-o "-B 4096"]; later we added
> a "-S 1024" as well - without any visible improvement.
> Machine has 1GB RAM.
Why on earth are you fiddling with PG's command-line options? You can
set all of this stuff in the postgresql.conf file, and I recommend you
do so.
So that's 8kB * 4096, or 32MB of shared buffers, and 1MB of sort memory. If
your queries are doing lots of sorting and sum()ing then that's probably
not enough.
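For example, the postgresql.conf equivalents would look something like this
(the values are only a starting point; on older versions shared_buffers is
in 8kB pages and work_mem in kB, and shared_buffers needs a restart to take
effect):

  shared_buffers = 4096          # 8kB pages = 32MB, same as -B 4096
  work_mem = 8192                # kB, per sort/hash operation
  maintenance_work_mem = 65536   # kB, used by VACUUM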
You might want to try issuing "SET work_mem=..." for various values
before each query and see if there's a good value for your workload.
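For example, in psql (assuming 8.x, where the setting is work_mem - on
older releases it is called sort_mem):

  SET work_mem = 16384;   -- kB; try e.g. 4096, 16384, 65536
  EXPLAIN ANALYSE
  SELECT sum(col1), sum(col2), count(col3)
  FROM your_table
  WHERE col4 > 100
  GROUP BY col5;
  RESET work_mem;

If the plan switches from an on-disk sort to an in-memory sort or hash
aggregate at some value, you've probably found the right ballpark.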
--
Richard Huxton
Archonet Ltd