Thread: INSERT query times
Hi,
I am using the libpq library and PostgreSQL 8.4 for my Linux application running on ARM with 256 MB of RAM. I am just doing:
PQconnectdb();
PQexec(INSERT INTO table1 ....); (0.009661 sec.)
PQexec(INSERT INTO table1 ....); (0.004208 sec.)
PQexec(INSERT INTO table2 ....); (0.007352 sec.)
PQexec(INSERT INTO table2 ....); (0.002533 sec.)
PQexec(INSERT INTO table2 ....); (0.002281 sec.)
PQexec(INSERT INTO table2 ....); (0.002244 sec.)
PQexec(INSERT INTO table3 ....); (0.006903 sec.)
PQexec(INSERT INTO table3 ....); (0.002903 sec.)
PQfinish();
I measure the time for each PQexec with the gettimeofday function, and I always see that the first INSERT into each table takes longer than the subsequent ones.
This must be something in the parser stage, and since I am running the same queries every time, I would like to know if there is a way to cache these queries in order to speed up the first INSERTs.
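For reference, a minimal sketch of the timing approach described above. The elapsed-time helper is hypothetical (the original post only mentions gettimeofday); the libpq calls are shown as comments so they are not part of the compiled helper:

```c
#include <sys/time.h>

/* Elapsed seconds between two gettimeofday() samples.
   Hypothetical helper matching the measurements quoted in this thread. */
double elapsed_sec(struct timeval start, struct timeval end)
{
    return (double)(end.tv_sec - start.tv_sec)
         + (double)(end.tv_usec - start.tv_usec) / 1e6;
}

/* Usage around a libpq call (sketch, not compiled here):

   struct timeval t0, t1;
   gettimeofday(&t0, NULL);
   PGresult *res = PQexec(conn, "INSERT INTO table1 VALUES (...)");
   gettimeofday(&t1, NULL);
   printf("%.6f sec.\n", elapsed_sec(t0, t1));
   PQclear(res);
*/
```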
Thanks in advance,
Sergio
Hello

a) Look at the COPY statement and the COPY API protocol - it can be 100x faster than INSERTs: http://www.postgresql.org/docs/8.3/static/libpq-copy.html

b) If you can't use COPY:
* wrap the INSERTs in an outer transaction - BEGIN, INSERT, INSERT ... COMMIT - if this is possible
* use a prepared statement: http://www.postgresql.org/docs/8.3/static/sql-prepare.html

If you cannot use an outer transaction, and you can replay the process if there are problems, use asynchronous commit: http://www.postgresql.org/docs/8.3/static/wal-async-commit.html

Regards

Pavel Stehule

2011/7/7 sergio mayoral <smayoral@gmail.com>:
> [original message trimmed]
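A minimal sketch of the batching idea in (b): build one command string that wraps several INSERTs in BEGIN ... COMMIT, so a single PQexec() round trip commits them all. The helper name, statement texts, and buffer size are assumptions for illustration; for repeated identical statements, PQprepare/PQexecPrepared would avoid re-parsing as well:

```c
#include <stdio.h>
#include <string.h>

/* Concatenate n statements into one "BEGIN; ...; COMMIT;" command
   string suitable for a single PQexec() call (hypothetical helper). */
static void build_batch(char *buf, size_t len,
                        const char *stmts[], int n)
{
    size_t off = (size_t)snprintf(buf, len, "BEGIN;");
    for (int i = 0; i < n; i++)
        off += (size_t)snprintf(buf + off, len - off, " %s;", stmts[i]);
    snprintf(buf + off, len - off, " COMMIT;");
}

/* Usage (sketch, not compiled here):

   char cmd[4096];
   const char *stmts[] = { "INSERT INTO table1 VALUES (1)",
                           "INSERT INTO table1 VALUES (2)" };
   build_batch(cmd, sizeof cmd, stmts, 2);
   PGresult *res = PQexec(conn, cmd);
   PQclear(res);
*/
```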
sergio mayoral <smayoral@gmail.com> writes:
> I check the time for each PQexec with gettimeofday function and I always see
> that the first INSERT for each table needs longer than the next ones.

The first few commands of *any* type on a new connection are going to take longer than repeat versions of those commands, because the backend needs to load up its internal caches. Once it's cached information about the tables, operators, etc that you are working with, it's a bit faster. This isn't specific to INSERT.

> this must be something with the parser stage and since i am doing every time
> the same queries, I would like to know if there is a way to cache this
> queries in order to speed up the first INSERTs.

The only "fix" is to hang onto your connections longer. Consider a connection pooler.

regards, tom lane