On 2009-09-02, V S P <toreason@fastmail.fm> wrote:
> Hi,
> our application is using Postgres in a rather unusual way.
> It is used by a GUI application to store several hundred
> thousand 'parameters'. Basically it is used like a big INI
> file.
>
> There are about 50 tables with various parameters.
>
> The application typically goes like this
>
> select id, child_tb_key_id, <fields with parms> from tb1
>
> then for each selected row above
> do a select from the child table (like the one above)
>
> and so on -- many levels deep
>
> I know that it is not a proper way to use SQL
> Instead we should be selecting many rows at once, joining them/etc
>
> But it is what it is now...
Rewriting it as a single query will speed up the operation.
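For example, one parent/child level can come back in a single round trip with a join (the table and column names below are guesses based on your description; tb2 stands in for whichever child table tb1 points at):

```sql
-- one round trip for a whole level of the tree,
-- instead of one query per tb1 row
SELECT p.id, p.child_tb_key_id, p.param_value,
       c.id          AS child_id,
       c.param_value AS child_value
FROM tb1 p
LEFT JOIN tb2 c ON c.id = p.child_tb_key_id;
```

The same idea extends down the levels by adding one LEFT JOIN per child table.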
> Queries are very fast though, Postgres reports that
> all the queries for a typical 'load' operation take 0.8 seconds
> -- however overall time that the GUI user perceives is 8 seconds.
> Out of that 8 seconds a big chunk is in the sending of the SQL
> statements/receiving results back -- just network traffic, parsing/etc
So: 0.8s of processing, 7.2s of latency and per-query overhead.
> There are total about 2400 queries that happen in that period of time
> (just selects)
> I am trying to figure out how can I optimize PG configuration
> to suit such a contrived deployment of Postgres.
> For example, we do not mind PG running on the same machine
> as the Client app (it is connected via the Qt SQL PG plugin, so it
> uses the PG native access library underneath).
>
> Are there any optimizations that can be done for that?
Try using libpq directly, bypassing the Qt plugin layer.
> Also this is a 'single' client/single connection system
> what optimizations can be done for that?
Asynchronous queries (PQsendQuery/PQgetResult in libpq), if you know what
you will ask next before the answer to the first query arrives.
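A rough sketch of that overlap with libpq (error handling omitted; `conn` is assumed to be an already-open connection and the query text is illustrative):

```cpp
#include <libpq-fe.h>

// Overlap client-side work with server execution: fire the query,
// do other work, then drain the result.
void fetch_overlapped(PGconn *conn)
{
    PQsendQuery(conn, "SELECT id, child_tb_key_id FROM tb1");

    // ... process the previous result set here, while the server
    // is already executing the new query ...

    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL) {
        // consume rows with PQntuples()/PQgetvalue()
        PQclear(res);
    }
}
```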
> and finally since most of the queries are very quick index-based
> selects what can be done to optimize the traffic between pg and
> the client?
Unix-domain sockets instead of TCP, since client and server are on the same machine.
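With both on one box, the conninfo only needs to point at the socket directory (the path below is an assumption -- it depends on how your Postgres was packaged, see unix_socket_directory in postgresql.conf):

```
host=/var/run/postgresql dbname=yourdb user=youruser
```

The Qt plugin passes the host name through to libpq, so a host value starting with / should select a Unix-domain socket in that directory.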
> thank you in advance for
> any recommendations/pointers.
But seriously, bite the bullet and redo it as a single query that returns
2400 rows, instead of 2400 queries that return one row each.
Or slurp all the tables into dictionaries (hash tables/arrays/whatever) in your
Qt application, and do the lookups in C++ instead of SQL.
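The slurp-into-dictionaries idea, as a minimal sketch (the Row struct and field names are made up for illustration; in the real application each vector would be filled by one SELECT per table):

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// One row of a parameter table; fields are illustrative, not your schema.
struct Row {
    int id;
    int child_tb_key_id;  // key into the child table
    std::string value;
};

// Index a whole table by id once, so every parent->child hop is an
// O(1) in-memory lookup instead of a network round trip.
std::unordered_map<int, Row> buildIndex(const std::vector<Row>& rows)
{
    std::unordered_map<int, Row> index;
    index.reserve(rows.size());
    for (const Row& r : rows)
        index.emplace(r.id, r);
    return index;
}
```

Loading 50 tables this way costs 50 queries up front; every lookup after that involves no network traffic at all.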