Re: High memory usage / performance issue ( temp tables ? ) - Mailing list pgsql-sql

From gmb
Subject Re: High memory usage / performance issue ( temp tables ? )
Date
Msg-id 1408272021571-5815111.post@n5.nabble.com
Whole thread Raw
In response to Re: High memory usage / performance issue ( temp tables ? )  (Marc Mamin <M.Mamin@intershop.de>)
Responses Re: High memory usage / performance issue ( temp tables ? )
List pgsql-sql
>> Are you using the same temp tables for the whole batch or do you
>> generate a few 100K of them ?

The process re-creates the 10 temp tables for each invocation of the
function, i.e. this equates to 500k temp tables for the 50k xml files.
The "ON COMMIT DROP" part was added at some stage as an attempt to solve
some performance issues. The argument was that, since a COMMIT is done
after each of the 50k xml files, the number of temp tables will not build
up and cause any problems.
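For clarity, the pattern looks roughly like this (function and table names
are illustrative, not the actual ones from my code):

```sql
CREATE OR REPLACE FUNCTION process_xml_file(p_xml xml) RETURNS void AS $$
BEGIN
    -- one of the ~10 temp tables created per call; columns are made up
    CREATE TEMP TABLE tmp_lines (
        line_no  int,
        payload  text
    ) ON COMMIT DROP;

    -- ... populate tmp_lines from p_xml and do the per-file work ...
END;
$$ LANGUAGE plpgsql;
```

Each CREATE/DROP cycle still writes and removes rows in pg_class,
pg_attribute, pg_type etc., so 500k cycles churn the system catalogs even
though each individual table is short-lived.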

I can understand the performance issue due to load on the catalog, but I
would not have expected this to have the impact I'm experiencing.

>> It may help to call analyze  explicitly on the touched tables 
>> a few times during your process. Here a look at the monitoring statistics
>> may give some clue. 
>> (http://blog.pgaddict.com/posts/the-two-kinds-of-stats-in-postgresql) 

Thanks, I'll try this and see if it makes any difference.
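If I understand the suggestion correctly, it would amount to something like
this after each temp table is populated (again a minimal sketch, since temp
tables are not visible to autovacuum and so never get analyzed automatically):

```sql
CREATE TEMP TABLE tmp_lines (line_no int, payload text) ON COMMIT DROP;
INSERT INTO tmp_lines VALUES (1, 'example row');

-- give the planner up-to-date row estimates before querying the table
ANALYZE tmp_lines;
```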

Thanks for the input.

Regards

gmb





