David Helgason <david@uti.is> writes:
> I'm calling one stored procedure with a prepared statement on the
> server with 6 arrays of around 1200 elements each as parameters. The
> parameters are around 220K in total.
Exactly how are you fetching, building, or otherwise providing the
arrays? Which PG version is this exactly?
> Any suggestions how I go about finding the bottleneck here? What tools
> do other people use for profiling on Linux?
Rebuild with profiling enabled (make clean; make PROFILE="-pg -DLINUX_PROFILE")
and then use gprof to produce a report from the gmon.out trace file that
the backend drops when it exits.
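A minimal sketch of that workflow, assuming you're building from a
PostgreSQL source tree; the install prefix and report filename here are
illustrative:

```shell
# In the PostgreSQL source tree: rebuild the backend with
# gprof instrumentation compiled in.
make clean
make PROFILE="-pg -DLINUX_PROFILE"
make install

# Restart the server, run the slow statement, then disconnect
# cleanly so the backend process exits and writes gmon.out.

# Feed the backend binary plus the trace file to gprof to get
# flat-profile and call-graph reports.
gprof /usr/local/pgsql/bin/postgres gmon.out > profile-report.txt
```

The flat profile at the top of the report shows which functions ate the
most CPU time, which is usually enough to spot quadratic behavior.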
If that sounds out of your league, send along a self-contained test case
and I'll be glad to take a look.
> This might sound like a "then don't do that" situation.
I'm betting on some O(N^2) behavior in the array code, but it'll be
difficult to pinpoint without profile results.
regards, tom lane