Large arrays give long lag on server side before command executes - Mailing list pgsql-general
From | David Helgason
---|---
Subject | Large arrays give long lag on server side before command executes
Date |
Msg-id | 6620FBD8-0E09-11D9-A5DE-000A9566DA8A@uti.is
Responses | Re: Large arrays give long lag on server side before command executes (2 responses)
List | pgsql-general
I'm calling one stored procedure with a prepared statement on the server, passing 6 arrays of around 1200 elements each as parameters. The parameters are around 220K in total. This is taking a surprising amount of time, so I put a lot of logging into the application and into the stored procedure being called. It seems that almost all of the time is spent before the stored procedure is even entered.

What raises my suspicion that this is not as it should be is that the time spent before the prepared statement starts is as long as or longer than the time spent inside the stored procedure, which does 1200 inserts into a table, a fairly complex outer join with that data, and a loop over the roughly 1200 resulting records.

Enabling lots of logging gives me:

    2004-09-22 20:27:29 [5812] LOG:  REWRITER STATISTICS
    DETAIL:  ! system usage stats:
    !       0.000004 elapsed 0.000000 user 0.000000 system sec
    !       [11.850198 user 0.215967 sys total]
    !       0/0 [0/0] filesystem blocks in/out
    !       0/0 [0/3001] page faults/reclaims, 0 [0] swaps
    !       0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
    !       0/0 [132/209] voluntary/involuntary context switches
    !       buffer usage stats:
    !       Shared blocks: 0 read, 0 written, buffer hit rate = 0.00%
    !       Local blocks: 0 read, 0 written, buffer hit rate = 0.00%
    !       Direct blocks: 0 read, 0 written
    2004-09-22 20:27:29 [5812] DEBUG:  ProcessUtility
    2004-09-22 20:27:29 [5812] DEBUG:  CommitTransactionCommand
    2004-09-22 20:27:29 [5812] LOG:  duration: 1.982 ms
    2004-09-22 20:27:30 [5812] DEBUG:  StartTransactionCommand
    2004-09-22 20:27:35 [5812] NOTICE:  Recursive status for begun at 2004-09-22 20:27:35.371659
    2004-09-22 20:27:39 [5812] LOG:  PARSER STATISTICS
    DETAIL:  ! system usage stats:
    !       0.000119 elapsed 0.000000 user 0.000000 system sec
    !       [20.825833 user 0.340948 sys total]
    !       0/0 [0/0] filesystem blocks in/out
    !       0/0 [0/3590] page faults/reclaims, 0 [0] swaps
    !       0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
    !       0/0 [273/362] voluntary/involuntary context switches
    !       buffer usage stats:
    !       Shared blocks: 0 read, 0 written, buffer hit rate = 0.00%
    !       Local blocks: 0 read, 0 written, buffer hit rate = 0.00%
    !       Direct blocks: 0 read, 0 written
    CONTEXT:  SQL function "get_asset_guid_safe" during startup
        PL/pgSQL function "maint_status_recursive" line 198 at assignment

What I find interesting is how five seconds pass between the 20:27:30 debug line, where it starts calling my procedure, and the log message I print at the top of the procedure at 20:27:35.

Any suggestions on how I should go about finding the bottleneck here? What tools do other people use for profiling on Linux?

This might sound like a "then don't do that" situation. The data is stuff that will be inserted into a temporary table, so I'd rather not go a different route, but I'm considering switching to a lot of INSERT statements or a COPY FROM instead.

Regards,

d.
--
David Helgason,
Business Development et al.,
Over the Edge I/S (http://otee.dk)
Direct line +45 2620 0663
Main line +45 3264 5049
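A minimal sketch of the two approaches discussed above, with hypothetical table, column, and function names and signatures (the post doesn't show the actual schema). The real call goes through a protocol-level prepared statement from the client, but an SQL-level PREPARE has the same shape:

    -- Current approach (sketch): pass parallel arrays of ~1200 elements
    -- each to the stored procedure through a prepared statement.
    -- my_proc and its parameter list are hypothetical stand-ins.
    PREPARE call_proc(text[], text[], integer[], integer[], timestamptz[], text[]) AS
        SELECT my_proc($1, $2, $3, $4, $5, $6);
    EXECUTE call_proc('{a,b}', '{x,y}', '{1,2}', '{3,4}',
                      '{"2004-09-22 20:27:00","2004-09-22 20:27:01"}',
                      '{p,q}');

    -- Considered alternative (sketch): load the rows into a temporary
    -- table with COPY and let the procedure read from that table instead
    -- of taking array parameters.
    CREATE TEMP TABLE proc_input (
        guid    text,
        name    text,
        version integer,
        parent  integer,
        changed timestamptz,
        digest  text
    );

    COPY proc_input FROM STDIN;   -- stream the ~1200 rows from the client,
                                  -- tab-separated, terminated by \.

    SELECT my_proc();             -- hypothetical variant that reads proc_input

The COPY variant avoids sending the data as one very large set of bind parameters, which may or may not be where the five-second gap above is being spent.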