> I also wonder why the reported runtime of 5.847 ms is so much different
> to the runtime reported of my scripts (both php and ruby are almost the
> same). What's the best tool to time queries in postgresql? Can this be
> done from pgadmin?
I've seen differences like that. Benchmarking isn't easy. The client
libraries, the particular language bindings you use, the connection
setup... all of that can add overhead that is actually much larger than
what you're trying to measure.
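
As a rough illustration (assuming Python with psycopg2, the "users" table
from your query, and a made-up "test" database), you can compare the
wall-clock time the client sees with the runtime the server itself
reports:

    import time
    import psycopg2

    conn = psycopg2.connect("dbname=test")   # connection string is an assumption
    cur = conn.cursor()

    # Wall-clock time as seen by the client: includes the protocol
    # round-trip, parsing, and converting results back to Python types.
    t0 = time.time()
    cur.execute("SELECT * FROM users WHERE user_id = %s", (42,))
    rows = cur.fetchall()
    print("client side : %.3f ms" % ((time.time() - t0) * 1000.0))

    # Time spent inside the server only, which is roughly the kind of
    # figure EXPLAIN ANALYZE (or pgAdmin) reports.
    cur.execute("EXPLAIN ANALYZE SELECT * FROM users WHERE user_id = 42")
    for (line,) in cur.fetchall():
        print(line)

The gap between those two numbers is exactly the overhead discussed below.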
- On "localhost", some MySQL distros will default to a UNIX Socket, some
Postgres distros will default to a TCP socket, or even SSL, and vice versa.
Needless to say, on a small query like "SELECT * FROM users WHERE
user_id=$1", this makes a lot of difference, since the query time (just a
few tens of microseconds) is actually shorter than the TCP overhead.
Depending on how you connect you can get a 2-3x variation in throughput
with client and server on the same machine, just between TCP and UNIX
socket.
On queries that retrieve lots of data, the overheads also differ quite a
bit (especially with SSL...).
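
To be sure which transport you're actually measuring, force it explicitly
rather than relying on the driver's default. A minimal sketch with
psycopg2 (the socket directory and database name are assumptions, check
your distro):

    import psycopg2

    # Plain TCP to localhost, SSL explicitly disabled
    tcp = psycopg2.connect(host="127.0.0.1", dbname="test", sslmode="disable")

    # UNIX-domain socket: "host" is the socket *directory*, not a hostname
    unix = psycopg2.connect(host="/var/run/postgresql", dbname="test")

    # SSL over TCP, for comparison
    ssl = psycopg2.connect(host="127.0.0.1", dbname="test", sslmode="require")

Running the same query loop over each of those connections is an easy way
to see how much of your measurement is transport overhead.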
- I've seen Postgres saturate a gigabit Ethernet link between server and
client during a benchmark.
- Performance depends a LOT on your language bindings. For instance:
PHP: PDO is quite a lot slower than pg_query(), especially if you use
prepared statements that are only executed once.
Python: psycopg, pygresql, and mysql-python behave quite differently
(psycopg is by far the fastest of the bunch), especially when retrieving
lots of results and converting them back to Python types... (a minimal
timing loop is sketched below)
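
So before comparing PHP to Ruby (or PDO to pg_query()), it helps to pin
down the per-binding overhead with the same tight loop in each
environment. A rough sketch of the Python/psycopg side (iteration count
and connection details are arbitrary):

    import time
    import psycopg2

    conn = psycopg2.connect(host="/var/run/postgresql", dbname="test")
    cur = conn.cursor()

    N = 10000
    t0 = time.time()
    for i in range(N):
        # parameterized query, sent and fetched N times
        cur.execute("SELECT * FROM users WHERE user_id = %s", (i % 100,))
        cur.fetchall()
    elapsed = time.time() - t0
    print("%.0f queries/s, %.3f ms per query" % (N / elapsed, elapsed / N * 1000.0))

Writing the equivalent loop with PDO (with and without prepared
statements) or pg_query_params() is what exposes the differences
mentioned above, rather than timing a single query.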
So, what exactly are you benchmarking?...