>> Are you using the same temp tables for the whole batch or do you generate
>> a few 100K of them ?
The process re-creates the 10 temp tables for each call of the function,
i.e. this equates to 500k temp tables for the 50k xml files.
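To give an idea of the shape of it (just a sketch; the actual function
creates 10 temp tables and the names below are made up):

CREATE OR REPLACE FUNCTION load_xml_file(p_xml xml) RETURNS void AS $$
BEGIN
  -- a fresh set of temp tables is created on every call, i.e. per xml file
  CREATE TEMP TABLE tmp_header (id int, doc_date date);
  CREATE TEMP TABLE tmp_detail (id int, header_id int, amount numeric);
  -- ... parse p_xml into the temp tables, then move the rows into the
  -- permanent tables ...
END;
$$ LANGUAGE plpgsql;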
The "ON COMMIT DROP" part was added at some stage as an attempt to solve
some performance issues. THe argument was that , since a COMMIT is done
after each of the 50k xml files , the number of temp tables will not build
up and cause any problems.
I can understand the performance issue due to load on the catalog, but I
would not have expected this to have the impact I'm experiencing.
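In other words, the temp table creation was changed to something along
these lines, with the calling side committing after every file (again,
placeholder names):

CREATE TEMP TABLE tmp_header (id int, doc_date date) ON COMMIT DROP;
-- ... and the same for the other temp tables

-- calling side, once per xml file:
BEGIN;
SELECT load_xml_file('<doc/>'::xml);
COMMIT;   -- temp tables created by this call are dropped here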
>> It may help to call analyze explicitly on the touched tables
>> a few times during your process. Here, a look at the monitoring statistics
>> may give some clue.
>> (http://blog.pgaddict.com/posts/the-two-kinds-of-stats-in-postgresql)
Thanks, I'll try this and see if it makes any difference.
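If I understand the suggestion correctly, that would mean something like
the following every few thousand files (the table names below are just
placeholders for whatever tables the process actually touches):

ANALYZE target_header;
ANALYZE target_detail;

-- and keeping an eye on the statistics views mentioned in the blog post:
SELECT relname, n_live_tup, n_dead_tup, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname IN ('target_header', 'target_detail');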
Thanks for the input.
Regards
gmb