Please allow me to pick out this thread again.
> > True, and in fact most of the performance problem in the client-side
> > MULTIBYTE code comes from the fact that it's not designed-in, but tries
> > to be a minimally intrusive patch. I think we could make it go faster
> > if we accepted that it was standard functionality. So I'm not averse to
> > going in that direction in the long term ...
I have checked the performance problem.
(Environment)
  - Hardware : P200pro CPU, 128MB RAM, 5400rpm disk
  - OS       : Red Hat Linux 5.2
  - Database : postgresql-7.0RC1
(Tested software and data)
  - Library : libpq
  - Program : ecpg application program, psql
  - SQL     : insert, select
  - Number of tuples : 100,000
(Test case)
  (1) non-MULTIBYTE
  (2) MULTIBYTE, encoding=SQL_ASCII
An ecpg program and the psql were used in this test case.
(Result) There was no difference in speed between (1) and (2). I could
*not* find the performance problem.
(Improvement) However, a performance problem may appear in a larger
test, e.g. 10,000,000 tuples, because each call to PQmblen() carries
some function-call overhead. If the MULTIBYTE version of PQmblen()
is changed as follows, the performance problem disappears
*completely*:
#ifdef MULTIBYTE
int
PQmblen(const unsigned char *s, int encoding)
{
    if (encoding == SQL_ASCII)
        return 1;                          /* <======= added line */
    return (pg_encoding_mblen(encoding, s));
}
#endif
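To make the effect of the added line concrete, here is a self-contained
sketch of the fast path. The encoding constants and the length routine
below are simplified stand-ins for illustration, not the real libpq
definitions:

```c
#include <assert.h>

/* Hypothetical stand-ins for PostgreSQL's encoding IDs -- not the
 * real values from the headers. */
enum { SQL_ASCII = 0, EUC_JP = 1 };

/* Simplified multibyte length routine: in EUC-JP a lead byte with
 * the high bit set starts a 2-byte character; otherwise 1 byte. */
static int mblen_by_encoding(int encoding, const unsigned char *s)
{
    if (encoding == EUC_JP && *s >= 0x80)
        return 2;
    return 1;
}

/* PQmblen-style wrapper with the proposed fast path: when the client
 * encoding is the single-byte SQL_ASCII, return 1 immediately and
 * skip the routine call entirely. */
static int pqmblen_fast(const unsigned char *s, int encoding)
{
    if (encoding == SQL_ASCII)
        return 1;               /* added fast path */
    return mblen_by_encoding(encoding, s);
}
```

Since the check is a single integer comparison, an SQL_ASCII client pays
essentially nothing per character, while other encodings still go
through the full length routine.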
(Conclusion) A client library/application should be built with
"configure --enable-multibyte[=SQL_ASCII]" even when postgresql itself
is built by "configure" without MULTIBYTE.
(Reference of library sizes)
                   non-MULTIBYTE   MULTIBYTE
  libpq.a               69KB          91KB
  libpq.so.2.0          52KB          52KB
  libpq.so.2.1          60KB          78KB
--
Regards,
SAKAIDA Masaaki -- Osaka, Japan