Hello,
I have been trying to get my Postgres database to do faster inserts.
The environment is basically a single-user situation.
The part that I would like to speed up is when a user copies a Project.
A Project consists of a number of Rooms (say 60). Each room contains a
number of items.
A project will contain, say, 20,000 records.
Anyway, the copying process gets slower and slower as more projects are
added to the database.
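To give an idea of what a copy involves, each room boils down to a couple of
INSERT ... SELECT statements, something like the following (the table and
column names here are made up and simplified; the real copy touches more
tables than this):

    -- Hypothetical, simplified schema: room(room_id serial, project_id, name)
    -- and item(item_id serial, room_id, description).
    BEGIN;
    -- create the new room in the destination project (here project 2, copying room 17)
    INSERT INTO room (project_id, name)
        SELECT 2, name FROM room WHERE room_id = 17;
    -- copy the source room's items across to the room just created
    INSERT INTO item (room_id, description)
        SELECT currval('room_room_id_seq'), description
        FROM item WHERE room_id = 17;
    COMMIT;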
My statistics (Athlon 1.8 GHz)
----------------
20,000 items      takes on average 0.078 seconds/room
385,000 items     takes on average 0.11 seconds/room
690,000 items     takes on average 0.270 seconds/room
1,028,000 items   takes on average 0.475 seconds/room
As can be seen, the time taken to process each room increases. A commit
occurs each time a room has been copied.
The hard drive is not being driven very hard. The hard drive light
only flashes about twice a second when there are a million records in
the database.
I thought that the problem could have been my plpgsql procedure, because
I assume the code is interpreted.
However, I have just rewritten the code using straight SQL (with some temp
fields), and the times turn out to be almost exactly the same as the
plpgsql version.
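For what it's worth, the plpgsql version is roughly this shape (again with
made-up table names; my real function is longer), i.e. essentially the same
inserts wrapped in a function:

    -- Rough shape of the plpgsql copy; names are hypothetical.
    CREATE FUNCTION copy_room(integer, integer) RETURNS integer AS '
    DECLARE
        src_room  ALIAS FOR $1;
        dest_proj ALIAS FOR $2;
        new_room  integer;
    BEGIN
        -- create the new room, then pick up its generated id
        INSERT INTO room (project_id, name)
            SELECT dest_proj, name FROM room WHERE room_id = src_room;
        new_room := currval(''room_room_id_seq'');
        -- copy the items across
        INSERT INTO item (room_id, description)
            SELECT new_room, description FROM item WHERE room_id = src_room;
        RETURN new_room;
    END;
    ' LANGUAGE 'plpgsql';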
The read speed for the application is fine, and the SQL planner seems to be
doing a good job. The only problem I have found was with one huge select,
which was fixed by a cross join.
I am running Red Hat 8. Some of the conf entries that I have changed
follow:
shared_buffers = 3700
effective_cache_size = 4000
sort_mem = 32168
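(If I have the units right, shared_buffers and effective_cache_size are
counted in 8 kB blocks, so 3700 buffers is roughly 29 MB and the cache
estimate about 31 MB; sort_mem is in kB, so about 31 MB per sort.)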
Are the increasing times reasonable?
The times themselves might look slow, but that's because there are a
number of tables involved in a copy.
I can increase the shared buffer size above 32 MB, but would this really
help?
TIA
Peter Mcgregor