Robert Haas wrote:
> On Sun, May 22, 2011 at 1:38 PM, Joshua Berkus <josh@agliodbs.com> wrote:
> >> Another point is that parsing overhead is quite obviously not the
> >> reason for the massive performance gap between one core running simple
> >> selects on PostgreSQL and one core running simple selects on MySQL.
> >> Even if I had (further) eviscerated the parser to cover only the
> >> syntax those queries actually use, it wasn't going to buy more than a
> >> couple points.
> >
> > I don't know if you saw Jignesh's presentation, but there seems to be a lot of reason to believe that we are
lock-bound on large numbers of concurrent read-only queries.
>
> I didn't see Jignesh's presentation, but I'd come to the same
> conclusion (with some help from Jeff Janes and others):
>
> http://archives.postgresql.org/pgsql-hackers/2010-11/msg01643.php
> http://archives.postgresql.org/pgsql-hackers/2010-11/msg01665.php
>
> We did also recently discuss how we might improve the behavior in this case:
>
> http://archives.postgresql.org/pgsql-hackers/2011-05/msg00787.php
>
> ...and ensuing discussion.
>
> However, in this case, there was only one client, so that's not the
> problem. I don't really see how to get a big win here. If we want to
> be 4x faster, we'd need to cut time per query by 75%. That might
> require 75 different optimizations averaging 1% apiece, most likely
> none of them trivial. I do confess I'm a bit confused as to why
> prepared statements help so much. That is increasing the throughput
> by 80%, which is equivalent to decreasing time per query by 45%. That
> is a surprisingly big number, and I'd like to better understand where
> all that time is going.
Prepared statements are pre-parsed/rewritten/planned, but I can't see
how decreasing the parser size would affect those other stages, and
certainly not by 45%.
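For what it's worth, the 45% figure follows directly from the throughput numbers quoted upthread, assuming throughput scales inversely with time per query. A quick sanity check of that arithmetic:

```python
# Sanity-check the throughput vs. per-query-time arithmetic from the
# thread: an 80% throughput increase means each query takes 1/1.8 of
# its original time, i.e. roughly a 45% reduction.

baseline_time = 1.0      # normalized time per query without prepared statements
throughput_gain = 0.80   # 80% more queries per second with prepared statements

new_time = baseline_time / (1.0 + throughput_gain)
reduction = 1.0 - new_time  # fraction of per-query time saved

print(f"new time per query: {new_time:.3f}")   # ~0.556
print(f"time reduction:     {reduction:.1%}")  # ~44.4%
```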
--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +