Postgres scalability and performance on windows - Mailing list pgsql-general

From Gopal
Subject Postgres scalability and performance on windows
Date
Msg-id A5DE6132B8D812419321747E42710E2204C05E@EXCHANGE01.gm.local
In response to Re: COPY FROM : out of memory  (Terry Fielder <terry@ashtonwoodshomes.com>)
Responses Re: Postgres scalability and performance on windows
Re: Postgres scalability and performance on windows
Re: Postgres scalability and performance on windows
List pgsql-general

Hi all,

 

I have a Postgres installation that's running at 70-80% CPU usage, while an MSSQL7 installation did roughly the same thing at 1-2% CPU load.

 

Here's the scenario:

300 queries/second
Server: Postgres 8.1.4 on Windows 2000 Server
CPU: dual Xeon 3.6 GHz
Memory: 4GB RAM
Disks: 3 x 36GB, 15K RPM SCSI
Client: a C# web application calling Postgres functions via Npgsql 0.7

It's an almost completely read-only DB, apart from fortnightly updates.

 

Table 1 - about 300,000 rows with simple rectangles
Table 2 - 1 million rows
Total size: 300MB

 

Functions: simple coordinate reprojection and an intersection query, plus an inner join of table1 and table2.
I think I have all the right indexes defined, and indeed query performance under low load is fast.
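
For reference, a minimal sketch of the query shape described above, with hypothetical table and column names (rect, table1_id); it uses the native box type and the && overlap operator so a GiST index can be used (the real functions also reproject coordinates first):

-- Hypothetical schema; rectangles stored as the native box type.
CREATE INDEX table1_rect_idx ON table1 USING gist (rect);

-- && tests bounding-box overlap and can use the GiST index.
SELECT t1.id, t2.attributes
FROM table1 t1
JOIN table2 t2 ON t2.table1_id = t1.id
WHERE t1.rect && box(point(0,0), point(10,10));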

 

 

==================================================================================
postgresql.conf has the following settings:

max_connections = 150
shared_buffers = 20000                   # min 16 or max_connections*2, 8KB each
temp_buffers = 2000                      # min 100, 8KB each
max_prepared_transactions = 25           # can be 0 or more
# note: increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
work_mem = 512                           # min 64, size in KB
#maintenance_work_mem = 16384            # min 1024, size in KB
max_stack_depth = 2048
effective_cache_size = 82728             # typically 8KB each
random_page_cost = 4                     # units are one sequential page fetch
==================================================================================
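
For scale, those values work out to roughly 160MB of shared_buffers (20000 x 8KB), 512KB of work_mem per sort/hash, and an effective_cache_size hint of about 650MB (82728 x 8KB). A hedged sketch of starting points sized for a 4GB, read-mostly box; these are assumptions to test, not measured optima (8.1 takes raw numbers: 8KB pages for shared_buffers/effective_cache_size, KB for work_mem):

shared_buffers = 50000             # ~400MB; very large values may help less on win32
work_mem = 4096                    # 4MB per sort/hash node, multiplied by concurrency
effective_cache_size = 300000      # ~2.3GB; tells the planner the OS file cache is large
random_page_cost = 3               # mostly-cached data makes random reads cheaper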

 

SQL Server caches all the data in memory, which is what makes it faster (it uses about 1.2GB of memory, which is fine).

Postgres, however, has everything spread across 10-15 processes, each using about 10-30MB; that is not nearly enough to cache all the data, so it ends up doing a lot of disk reads.

I've read that Postgres depends on the OS to cache the files; I wonder if this is not happening on Windows.
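
One way to check this from inside Postgres is the pg_statio view, which exists in 8.1 (block-level counters are collected only when stats_block_level = on). Note that heap_blks_read counts blocks not found in shared_buffers; those may still be served from the OS file cache rather than physical disk:

-- Per-table cache behaviour: hits in shared_buffers vs. blocks read.
SELECT relname, heap_blks_read, heap_blks_hit,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 2) AS hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC;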

 

In any case, I cannot believe that having 15-20 processes running on Windows helps. Why not spawn threads instead of processes, which might be far less expensive and more efficient? Is there any way of doing this?

 

My question is: should I just accept the performance I am getting as the limit on Windows, or should I be looking at some other parameters that I might have missed?

 

Thanks,

Gopal
