Rollo Konig-Brock <rollokb@gmail.com> writes:
> I've been pulling my hair out over this for days now, as I'm trying to
> build a low-latency application. Databases should be fast, but I cannot
> work out why so much latency is added between the actual database process
> and the application code. For simple queries that should take less than a
> millisecond, this mystery latency is by far the biggest performance hit.
Well, for sub-millisecond queries, I'm afraid that EXPLAIN's numbers
omit a lot of the overhead that you have to think about for such short
queries. For instance:
* Query parsing (both the grammar and parse analysis). You could get
a handle on how much this is relative to what EXPLAIN knows about by
enabling log_parser_stats, log_planner_stats, and log_executor_stats.
Depending on workload, you *might* be able to ameliorate these costs
by using prepared queries, although that cure can easily be worse
than the disease.
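
  As a sketch of both ideas (the table and statement names here are
  hypothetical, and the stats output goes to the server log, not to
  the client):

	-- Per-session timing breakdown, reported in the server log:
	SET log_parser_stats = on;
	SET log_planner_stats = on;
	SET log_executor_stats = on;

	-- A prepared statement skips raw parsing on re-execution:
	PREPARE fetch_user (int) AS
	  SELECT * FROM users WHERE id = $1;

	EXECUTE fetch_user(42);

  Note that after several executions the plan cache may switch to a
  generic plan, which for skewed data can be slower than re-planning;
  that's one way the cure ends up worse than the disease.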
* I/O conversions, notably both formatting of output data and charset
encoding conversions. You can possibly ameliorate these by using
binary output and making sure that the client and server use the same
encoding.
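
  Binary result format is requested at the protocol level, not in SQL
  (in libpq, that's PQexecParams with resultFormat = 1; check whether
  your driver exposes it). The encoding side you can inspect and fix
  from SQL directly:

	-- A mismatch here forces a conversion on every row sent:
	SHOW server_encoding;
	SHOW client_encoding;

	-- Align the client with the server (UTF8 assumed here):
	SET client_encoding = 'UTF8';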
* SSL encryption. This is probably not enabled on a local loopback
connection, but it doesn't hurt to check.
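
  One way to check from SQL (pg_stat_ssl exists in 9.5 and later):

	SELECT ssl, version, cipher
	  FROM pg_stat_ssl
	 WHERE pid = pg_backend_pid();

  If it's on and you don't need it locally, connecting with
  sslmode=disable in the connection string turns it off.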
* Backend catalog cache filling. It doesn't pay to make a connection
for just one or a few queries, because a newly started backend
process won't really be up to speed until it's populated its caches
with catalog data that's relevant to your queries. I think most
(not all) of this cost is incurred during parse analysis, which
would help to hide it from EXPLAIN-based investigation.
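
  You can see this effect from psql on a freshly opened connection
  ("users" again being a stand-in for one of your tables):

	\timing on
	SELECT count(*) FROM users;   -- first run: catalog caches cold
	SELECT count(*) FROM users;   -- repeat: caches warm, noticeably faster

  This is also the argument for keeping connections open or putting a
  pooler such as PgBouncer in front of the server when the workload is
  lots of short queries.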
regards, tom lane