Hi,

I am investigating a performance problem encountered after porting an old embedded DB to PostgreSQL. Since the system is real-time sensitive, we are concerned about per-query cost. In our environment, a sequential scan (select * from ...) of a table with tens of thousands of records costs 1-2 seconds, whether measured through the ODBC driver or via the "\timing" output of the psql client (which in turn relies on libpq). However, according to EXPLAIN ANALYZE, or to the statistics in the pg_stat_statements view, the same query costs less than 100 ms.

So, is it the client interface (ODBC,
libpq) cost, mainly due to TCP? Do the pg_stat_statements and EXPLAIN ANALYZE timings include the cost of copying tuples from shared buffers into the result set?

Could you experts share your views on this big gap, and any suggestions to
optimise?

P.S. In our original embedded DB, a "fastpath" interface is provided to read records directly from shared memory, which gives extremely fast real-time access (at the cost, of course, of some other features such as consistency).

Best regards,
Han
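P.P.S. For concreteness, this is roughly how the two numbers above are obtained in a psql session (the table name "t" is made up for illustration):

```
-- client-side timing, as the application sees it:
\timing on
SELECT * FROM t;
-- psql reports roughly 1-2 seconds here

-- server-side timing, as reported by the planner/executor:
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t;
-- Execution Time shown here is under 100 ms
```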