I wrote a test program in C++ using libpq. It works as follows (pseudo code):
for ( int loop = 0; loop < 1000; ++loop ) {
    PQexec(m_conn, "BEGIN");
    const char* sql = "INSERT INTO pg_perf_test (id, text) VALUES ($1,$2)";
    PQprepare(m_conn, "stmtid", sql, 0, NULL);
    for ( int i = 0; i < 1000; ++i ) {
        // Set parameter values etc.
        PQexecPrepared(m_conn, …);
    }
    PQexec(m_conn, "DEALLOCATE stmtid");
    PQexec(m_conn, "COMMIT");
}
I measured the duration of each iteration of the outer for-loop, resulting in an average of 450 ms per 1000 rows inserted.
After that, I wrote a test program in Java using JDBC. It works as follows:
for ( int loops = 0; loops < 1000; ++loops ) {
    String sql = "INSERT INTO pq_perf_test (id, text) VALUES (?,?)";
    PreparedStatement stmt = con.prepareStatement(sql);
    for ( int i = 0; i < 1000; ++i ) {
        // Set parameter values etc.
        stmt.addBatch();
    }
    stmt.executeBatch();
    con.commit();
    stmt.close();
}
I measured the duration of each iteration of the outer for-loop, resulting in an average of 100 ms per 1000 rows inserted.
This means that inserting into PostgreSQL via JDBC was about 4-5 times faster than via libpq in this test.
I measured comparable results with analogous UPDATE and DELETE statements.
I need to improve the performance of my C++ code. Is there any way in libpq to reach the performance of JDBC for INSERT, UPDATE and DELETE statements? (Using COPY is not an option for me.) I did not find anything in libpq comparable to PreparedStatement.executeBatch().
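To illustrate the kind of batching I have in mind: on the SQL level I could of course build a single multi-row INSERT by hand and send it with PQexecParams in one round trip, roughly as in the following sketch (the helper insert_batch and its exact shape are only an illustration, not something I have benchmarked):

#include <libpq-fe.h>
#include <string>
#include <utility>
#include <vector>

// Sketch only: build one multi-row INSERT and send all rows in a single round trip.
// Table and column names match my test table; everything else is illustrative.
bool insert_batch(PGconn* conn,
                  const std::vector<std::pair<std::string, std::string> >& rows)
{
    std::string sql = "INSERT INTO pg_perf_test (id, text) VALUES ";
    std::vector<const char*> params;
    params.reserve(rows.size() * 2);

    for (size_t r = 0; r < rows.size(); ++r) {
        if (r > 0)
            sql += ",";
        // Append "($1,$2)", "($3,$4)", ... and collect the parameter values.
        sql += "($" + std::to_string(2 * r + 1) + ",$" + std::to_string(2 * r + 2) + ")";
        params.push_back(rows[r].first.c_str());
        params.push_back(rows[r].second.c_str());
    }

    PGresult* res = PQexecParams(conn, sql.c_str(),
                                 (int)params.size(),  // number of parameters
                                 NULL,                // let the server infer the types
                                 &params[0],          // parameter values as text
                                 NULL, NULL,          // text format, lengths not needed
                                 0);                  // return results in text format
    bool ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
    PQclear(res);
    return ok;
}

But this only helps for INSERT and quickly becomes unwieldy, so I would much prefer a real batch interface in libpq like the one JDBC offers.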
Best regards,
Werner Scholtes