Thanks to all who replied. Thanks for the tip on that last thread,
Tom; I don't know how I missed it. I have a hunch it's not applicable
to me at this time because I'm running software that's a year and a
half old (included in OpenBSD 3.4), but I will have to check which
version of DBD::Pg was installed.
> The intent of prepared statements is to reduce the overhead of running
> the parser, rewriter and planner multiple times for a statement that is
> executed multiple times. For an INSERT query without any sub-selects
> that is not rewritten by any rules, the cost to parse, rewrite and plan
> the statement is trivial. So I wouldn't expect prepared statements to be
> a big win -- you would gain a lot more from batching multiple inserts
> into a single transaction, and more still from using COPY.
I was thinking along the same lines, and was considering using the
COPY statement as my next step, but as someone mentioned I would then
have to load the data into a temporary table and do some more internal
magic to preserve referential integrity, unique constraints, etc. For
that reason I was hoping to keep it in Perl; it's always nice to keep
everything in a single neat portable package.
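For the record, it sounds like COPY can be driven from Perl after all:
if I'm reading the DBD::Pg docs right, newer versions expose the COPY
protocol directly (newer than what I have, most likely). A rough,
untested sketch, with a made-up table and connection details:

use DBI;

# Connection details and the raw_log table are made up for the sketch.
my $dbh = DBI->connect('dbi:Pg:dbname=testdb', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 1 });

my @rows = ( { host => 'hostA', msg => 'link up' },
             { host => 'hostB', msg => 'link down' } );

# COPY's default text format: tab-separated columns, one row per line.
$dbh->do('COPY raw_log (host, msg) FROM STDIN');
for my $row (@rows) {
    $dbh->pg_putcopydata("$row->{host}\t$row->{msg}\n");
}
$dbh->pg_putcopyend();

That still leaves the referential-integrity juggling, of course, so
the temporary-table step wouldn't go away.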
Also, I forgot to mention earlier that I tried using transactions to
speed things up, but since I expect certain inserts to fail, I would
need to rework my code so the whole transaction doesn't abort when one
insert goes bad. That is somewhat contrary to the point of a
transaction, so I'm not sure how to accomplish it. I did see roughly a
20% speed improvement by turning autocommit off and committing only at
the end of parsing each file.
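If I do end up on 8.0, it looks like savepoints would let me keep one
transaction per file and still shrug off the occasional bad insert.
Something like this untested sketch (table, columns, and connection
details are all made up, and SAVEPOINT needs an 8.0 server):

use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=testdb', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 0 });

my @rows = parse_log_file();    # hypothetical parser from my script

my $sth = $dbh->prepare(
    'INSERT INTO log_entries (host, msg) VALUES (?, ?)');

for my $row (@rows) {
    # Fence off each insert; a failure rolls back to the savepoint
    # instead of poisoning the whole transaction.
    $dbh->do('SAVEPOINT one_insert');
    eval {
        $sth->execute($row->{host}, $row->{msg});
        $dbh->do('RELEASE SAVEPOINT one_insert');
    };
    $dbh->do('ROLLBACK TO SAVEPOINT one_insert') if $@;
}

$dbh->commit;    # still one commit per parsed file

I'd have to measure whether the per-row savepoint round trip eats into
that 20%, but it should still beat committing every insert.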
I think in the end I need to check the version of my Pg driver and
perhaps upgrade to 7.4 or 8.0 in order to take advantage of
server-side prepared statements. This is only a development box and
I'm doing this mostly as an academic exercise that will someday help
me speed up the production side, so upgrading isn't out of the
question.
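For reference, my understanding is that with DBD::Pg 1.40 or later
talking to a 7.4+ server, prepare() can become a real server-side
PREPARE, and there's an explicit knob for it. Another untested sketch
with made-up names:

use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=testdb', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 0 });

my @rows = parse_log_file();    # same hypothetical parser as above

# pg_server_prepare asks DBD::Pg for a true server-side prepare,
# so the INSERT is parsed and planned once, not once per row.
my $sth = $dbh->prepare(
    'INSERT INTO log_entries (host, msg) VALUES (?, ?)',
    { pg_server_prepare => 1 });

$sth->execute($_->{host}, $_->{msg}) for @rows;
$dbh->commit;

Given the advice quoted above about planning cost being trivial for a
simple INSERT, though, I won't expect miracles from this alone.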
Matt