Hi all,
I have got to say that my first foray into PostgreSQL is becoming a
very maddening experience... I am sure it is my own fault for not knowing
very much, but it seems that everything I have tried so far to improve
performance has in fact made it a lot worse. Now my program dies after
roughly 300 seconds of processing directories, and updates take literally
10 times longer than inserts (which are themselves very slow).
I am sorry for whining... I've been trying to figure this out
non-stop for nearly two weeks...
Anyway, I moved my backup program to another dedicated machine (an
AMD Athlon 1.2GHz (1700+) with 512MB RAM and a Seagate Barracuda 7200.7,
2MB buffer ATA/100 IDE drive). As it stands now I have increased shmmax
to 128MB and in the 'postgresql.conf' I dropped max_connections to 10
and upped shared_buffers to 4096.
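For reference, the relevant settings now read roughly like this (the
shmmax line is shown as a sysctl entry purely for illustration; the
value is 128MB expressed in bytes):

    # kernel shared memory limit (128MB)
    kernel.shmmax = 134217728

    # postgresql.conf
    max_connections = 10
    shared_buffers = 4096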
What is happening now is that the program does an 'ls' (system call)
to get a list of the files and directories starting at the root of a
mounted partition. These are read into an array, which Perl then
processes one at a time. Each 'ls' value is searched for in the database
and, if it doesn't exist, the values are inserted. If they do exist, they
are updated (at 1/10th the speed). If the file is in fact a directory,
Perl jumps into it, again reads its contents into another array and
processes them one at a time. It will do this until all files and
directories on the partition have been processed.
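Stripped way down, the loop looks something like this (the connection
details, column names and the starting path are placeholders, not my
real schema; the real statements are prepared once before the scan and
error handling is left out):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect("dbi:Pg:dbname=backup", "user", "password",
                           { AutoCommit => 1, RaiseError => 1 });

    # The three statements are prepared once, before the scan starts.
    my $sel = $dbh->prepare("SELECT 1 FROM file_dir
                              WHERE parent_dir = ? AND file_name = ? AND file_type = ?");
    my $ins = $dbh->prepare("INSERT INTO file_dir (parent_dir, file_name, file_type)
                              VALUES (?, ?, ?)");
    my $upd = $dbh->prepare("UPDATE file_dir SET file_type = ?
                              WHERE parent_dir = ? AND file_name = ?");

    sub process_dir {
        my ($dir) = @_;

        # 'ls' via a system call, read into an array.
        my @entries = split /\n/, `ls "$dir"`;

        for my $name (@entries) {
            my $path = "$dir/$name";
            my $type = -d $path ? 'd' : 'f';

            # Search for the entry; insert if missing, update if found.
            $sel->execute($dir, $name, $type);
            my $found = $sel->fetchrow_arrayref;
            $sel->finish;

            if ($found) {
                $upd->execute($type, $dir, $name);   # updates run ~10x slower
            } else {
                $ins->execute($dir, $name, $type);   # inserts are slow too
            }

            # If the entry is a directory, jump into it and repeat.
            process_dir($path) if -d $path;
        }
    }

    process_dir("/mnt/backup");   # root of the mounted partition (placeholder)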
My previous question was performance based; now I just need to get
the darn thing working again. Like I said, after ~300 seconds Perl dies.
If I disable auto-commit then it dies the first time it runs an insert.
(This is all done on the same table, 'file_dir'.) If I add a 'commit'
before each select then a bunch of selects will work (a few dozen) and
then it dies anyway.
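In DBI terms, what I mean is roughly this (connection details are
placeholders again):

    # disabling auto-commit -- dies on the very first insert:
    my $dbh = DBI->connect("dbi:Pg:dbname=backup", "user", "password",
                           { AutoCommit => 0, RaiseError => 1 });

    # the workaround I tried -- a 'commit' before each select;
    # a few dozen selects work and then it dies anyway:
    $dbh->commit;
    $sel->execute($dir, $name, $type);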
Does this sound at all like a common problem? Thanks for reading my
gripe.
Madison
PS - PostgreSQL 7.4 on Fedora Core 2; indexes on the three columns I
search, and my SELECT, UPDATE and INSERT calls are prepared.