Thread: Query on PostgreSQL performance
Hi,
I am currently using PostgreSQL for a research project. I observed some performance results of PostgreSQL which I would like to discuss.
I have a server which accepts requests from clients. It spawns a new thread for each client. The clients are trying to add entries to a relation in a PostgreSQL database. The server (a C/C++ program running on Linux) accesses PostgreSQL using psqlodbc.
My server opens a new connection to PostgreSQL for each client. My postgresql.conf has the following additional settings. I am also running the postmaster with the "-o -F" option.
tcpip_socket = true
max_connections = 100
shared_buffers = 200
vacuum_mem = 16384
My clients add strings of approximately 20 bytes in length. The database size is 1 million entries.
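For concreteness, this is roughly what the server does over each per-client ODBC connection during a trial. It is only a sketch: error checking is omitted, and the DSN "pgtest", the user "naveen" and the table entries(val varchar(20)) are made-up names for illustration, not the real ones.

#include <sql.h>
#include <sqlext.h>

void run_trial(SQLHENV env)
{
    SQLHDBC  dbc;
    SQLHSTMT stmt;

    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
    SQLConnect(dbc, (SQLCHAR *) "pgtest", SQL_NTS,
               (SQLCHAR *) "naveen", SQL_NTS,
               (SQLCHAR *) "", SQL_NTS);
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);

    /* add 10,000 entries of roughly 20 bytes each */
    for (int i = 0; i < 10000; i++)
        SQLExecDirect(stmt,
            (SQLCHAR *) "INSERT INTO entries (val) "
                        "VALUES ('aaaaaaaaaaaaaaaaaaaa')", SQL_NTS);

    /* then delete them again (simplified: in reality only the rows
       this particular client added are deleted) */
    SQLExecDirect(stmt,
        (SQLCHAR *) "DELETE FROM entries "
                    "WHERE val = 'aaaaaaaaaaaaaaaaaaaa'", SQL_NTS);

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
}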
I observed the following results:
1) Effects related to VACUUM: I performed 10 trials of adding and deleting entries. In each trial, 1 client adds 10,000 entries and then deletes them. Over the course of these 10 trials, the Add Rate (the rate at which my server can add entries to the PostgreSQL database) drops from around 200 Adds/second in the 1st trial to around 100 Adds/second in the 10th trial. But when I do a VACUUM, I immediately get the Add Rate back to around 200 Adds/second.
This effect is more pronounced if there is more than 1 client. As the number of clients increases, the Add Rate drops more steeply, requiring me to perform VACUUM more frequently between the trials. So if I plot the Add Rate on the Y-axis against the number of Add operations on the X-axis, I get a sawtooth graph.
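The VACUUM between trials is just an ordinary SQL statement; for illustration, here it is issued over an already-open ODBC connection (the handle dbc and the table name entries are the same made-up names as in the sketch above):

SQLHSTMT stmt;

SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
/* reclaims the dead tuples left behind by the DELETEs; VACUUM cannot be
   run inside an explicit transaction block, so autocommit must be on */
SQLExecDirect(stmt, (SQLCHAR *) "VACUUM ANALYZE entries", SQL_NTS);
SQLFreeHandle(SQL_HANDLE_STMT, stmt);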
2) In the second experiment, I had a multithreaded client, in the sense that it spawns as many threads as specified by a command-line argument. The server in turn opens a new connection for each thread of each client (even as the number of clients increases).
I fixed the number of threads per client at 4 and increased the number of clients from 1 to 10. I observed:
a) As the number of clients increases, the Add Rate decreases from around 200 Adds/second for 1 client to around 130 Adds/second for 10 clients.
b) Suppose I run a trial with 3 clients and 4 threads per client and get an Add Rate of 180 Adds/second the first time. This Add Rate decreases the second time I repeat the same trial, with everything else being the same.
During each trial, each thread of each client adds 3,000 entries and deletes them, and I perform a VACUUM after each trial.
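The multithreaded client itself is just a small pthread loop along the following lines; send_requests() is a hypothetical stand-in for the code that actually sends the Add/Delete requests to my server, and the fixed array size is only for the sketch.

#include <pthread.h>
#include <stdlib.h>

void send_requests(int n)
{
    /* hypothetical: send n Add requests followed by n Delete requests
       to the server over its client protocol (not shown) */
}

void *worker(void *arg)
{
    /* each thread asks the server to add 3,000 entries and then delete
       them; the server holds one ODBC connection per client thread */
    send_requests(3000);
    return NULL;
}

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    int nthreads = atoi(argv[1]);          /* threads per client, e.g. 4 */
    pthread_t tids[64];

    for (int i = 0; i < nthreads; i++)
        pthread_create(&tids[i], NULL, worker, NULL);
    for (int i = 0; i < nthreads; i++)
        pthread_join(tids[i], NULL);
    return 0;
}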
PostgreSQL version: 7.2.4
psqlodbc version: 7.03.0100
I was using PostgreSQL 7.3.3 earlier, but it kept crashing the database after a VACUUM, so I switched to the older and more stable version 7.2.4.
Any comments on these observations would be very welcome. Additional details can be provided if needed.
Thanking you in advance,
Naveen.
On Wed, 2003-09-03 at 15:32, Naveen Palavalli wrote:

> shared_buffers = 200

If you're using a relatively modern machine, this is probably on the low side.

> 1) Effects related to VACUUM: I performed 10 trials of adding and
> deleting entries. In each trial, 1 client adds 10,000 entries and
> then deletes them. Over the course of these 10 trials, the Add Rate
> (the rate at which my server can add entries to the PostgreSQL
> database) drops from around 200 Adds/second in the 1st trial to
> around 100 Adds/second in the 10th trial. But when I do a VACUUM,
> I immediately get the Add Rate back to around 200 Adds/second.

Well, there's nothing wrong with vacuuming frequently (since it won't block concurrent database operations, and the more often you vacuum, the less time each vacuum takes).

> I was using PostgreSQL 7.3.3 earlier, but it kept crashing the database
> after a VACUUM, so I switched to the older and more stable version 7.2.4.

Can you reproduce the 7.3.3 crash? (BTW, in the future, it would be appreciated if you could report these kinds of bugs to the dev team.)

-Neil
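On 7.2/7.3, shared_buffers is a count of 8 kB buffer pages, so 200 is only about 1.6 MB of buffer cache. A setting more in line with Neil's suggestion might look something like the following in postgresql.conf; the exact value is only an illustration and may require raising the kernel's shared-memory limits.

shared_buffers = 2000        # 8 kB pages each, i.e. roughly 16 MB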
On Wed, Sep 03, 2003 at 12:32:42PM -0700, Naveen Palavalli wrote:

> My server opens a new connection to PostgreSQL for each client.

I don't think you want to do that. You should use a pool. Back end startup is mighty expensive.

> 1) Effects related to VACUUM: I performed 10 trials of adding and
> deleting entries. In each trial, 1 client adds 10,000 entries
> and then deletes them. Over the course of these 10 trials,

You'll want to vacuum after every set of deletes, I should think. If you're working in more than one transaction for the deletes, then fairly frequent vacuums of that table will be effective.

> I was using PostgreSQL 7.3.3 earlier, but it kept crashing the
> database after a VACUUM, so I switched to the older and more stable
> version 7.2.4.

You don't want to use 7.3.3. It has a rare but serious bug and was replaced in something like 24 hours with 7.3.4. The 7.2 branch is no longer being maintained, so you really probably should use the 7.3 branch. I'm unaware of others having stability problems with 7.3.4, so if you see them, you should find your core dump and talk to the people on -hackers.

A

-- 
----
Andrew Sullivan                         204-4141 Yonge Street
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M2P 2A8
                                         +1 416 646 3304 x110
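Picking up Andrew's pooling suggestion, here is a minimal sketch of a pool of already-open ODBC connections that the server threads could share instead of starting a new backend per client. It is only an illustration, reusing the same made-up DSN and user names as the earlier sketches; there is no error checking, and the pool size is arbitrary.

#include <sql.h>
#include <sqlext.h>
#include <pthread.h>
#include <vector>

class ConnPool {
public:
    ConnPool(SQLHENV env, int size) {
        pthread_mutex_init(&lock_, NULL);
        for (int i = 0; i < size; i++) {
            SQLHDBC dbc;
            SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
            SQLConnect(dbc, (SQLCHAR *) "pgtest", SQL_NTS,
                       (SQLCHAR *) "naveen", SQL_NTS,
                       (SQLCHAR *) "", SQL_NTS);
            free_.push_back(dbc);
        }
    }

    /* hand out an open connection; returns SQL_NULL_HDBC if none is free */
    SQLHDBC acquire() {
        pthread_mutex_lock(&lock_);
        SQLHDBC dbc = SQL_NULL_HDBC;
        if (!free_.empty()) {
            dbc = free_.back();
            free_.pop_back();
        }
        pthread_mutex_unlock(&lock_);
        return dbc;
    }

    /* put the connection back instead of disconnecting it */
    void release(SQLHDBC dbc) {
        pthread_mutex_lock(&lock_);
        free_.push_back(dbc);
        pthread_mutex_unlock(&lock_);
    }

private:
    pthread_mutex_t      lock_;
    std::vector<SQLHDBC> free_;
};

Each server thread would then acquire() a connection for the duration of one request and release() it afterwards, so the backend startup cost is paid only once per pooled connection rather than once per client.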