Thread: Show max_identifier_length causes heavy query execution

Show max_identifier_length causes heavy query execution

From
Moreno D.
Date:
Hello all,

I noticed that after executing a simple query that takes a few milliseconds, the ODBC driver executes another query which takes about 30 seconds, because it scans a large table.
This is the part of the log file that I think shows the problem; see below for the entire log file:

[15.534]conn=02E72C78, query='select count(*) from az001.doc101 where (code,riga)=(1,1)
'
[15.552]    [ fetched 1 rows ]
[15.608]conn=02E72C78, query='show max_identifier_length'
[15.619]    [ fetched 1 rows ]
[15.660]STATEMENT ERROR: func=PGAPI_ColAttributes, desc='', errnum=30, errmsg='this request may be for MS SQL Server'

Then the driver executes the heavy query "SELECT * FROM az001.doc101", which retrieves about 1.5 million rows.

I came up with a workaround for this problem: executing the query
SELECT (select count(*) from az001.doc101 where (code,riga)=(1,1))
instead of the original one. However, this problem seems to appear in many places in the application, so I would like to find a better solution.

Here is some additional information.
The table structure is:

CREATE TABLE az001.doc101
(
  code integer NOT NULL,
  riga integer NOT NULL,
  subriga integer NOT NULL,
  t_prog character varying(6) NOT NULL,
  fd_a character varying,
  fd_nd double precision,
  fd_il integer,
  fd_v double precision,
  fd_d timestamp with time zone,
  fd_sn boolean NOT NULL DEFAULT false,
  CONSTRAINT doc101_pkey PRIMARY KEY (code, riga, subriga, t_prog),
  CONSTRAINT doc101_code_fkey1 FOREIGN KEY (code, riga, subriga)
      REFERENCES az001.doc110 (code, riga, subriga) MATCH SIMPLE
      ON UPDATE CASCADE ON DELETE CASCADE
)
WITH (OIDS=FALSE);

and this is the entire log file:

[0.060]conn=02E72C78, PGAPI_DriverConnect( in)='DSN=PostgreSQL Log;', fDriverCompletion=0
[0.082]DSN info: DSN='PostgreSQL Log',server='localhost',port='5432',dbase='muna',user='client',passwd='xxxxx'
[0.091]          onlyread='0',protocol='7.4',showoid='0',fakeoidindex='0',showsystable='0'
[0.097]          conn_settings='', conn_encoding='(null)'
[0.100]          translation_dll='',translation_option=''
[0.116]Driver Version='08.03.0300,200809260001' linking 1400 static Multithread library
[0.123]Global Options: fetch=100, socket=4096, unknown_sizes=2, max_varchar_size=255, max_longvarchar_size=8190
[0.131]                disable_optimizer=0, ksqo=1, unique_index=1, use_declarefetch=0
[0.133]                text_as_longvarchar=1, unknowns_as_longvarchar=0, bools_as_char=1 NAMEDATALEN=64
[0.136]                extra_systable_prefixes='dd_;', conn_settings='' conn_encoding=''
[0.544]    [ PostgreSQL version string = '8.3.4' ]
[0.545]    [ PostgreSQL version number = '8.3' ]
[0.565]conn=02E72C78, query='select oid, typbasetype from pg_type where typname = 'lo''
[0.586]    [ fetched 0 rows ]
[0.601]    [ Large Object oid = -999 ]
[0.602]    [ Client encoding = 'UTF8' (code = 6) ]
[0.613]conn=02E72C78, PGAPI_DriverConnect(out)='DSN=PostgreSQL Log;DATABASE=muna;SERVER=localhost;PORT=5432;UID=client;PWD=xxxxxx;SSLmode=disable;ReadOnly=0;Protocol=7.4-1;FakeOidIndex=0;ShowOidColumn=0;RowVersioning=0;ShowSystemTables=0;ConnSettings=;Fetch=100;Socket=4096;UnknownSizes=2;MaxVarcharSize=255;MaxLongVarcharSize=8190;Debug=1;CommLog=1;Optimizer=0;Ksqo=1;UseDeclareFetch=0;TextAsLongVarchar=1;UnknownsAsLongVarchar=0;BoolsAsChar=1;Parse=0;CancelAsFreeStmt=0;ExtraSysTablePrefixes=dd_;;LFConversion=1;UpdatableCursors=1;DisallowPremature=0;TrueIsMinus1=0;BI=0;ByteaAsLongVarBinary=1;UseServerSidePrepare=0;LowerCaseIdentifier=0;XaOpt=1'
[15.534]conn=02E72C78, query='select count(*) from az001.doc101 where (code,riga)=(1,1)
'
[15.552]    [ fetched 1 rows ]
[15.608]conn=02E72C78, query='show max_identifier_length'
[15.619]    [ fetched 1 rows ]
[15.660]STATEMENT ERROR: func=PGAPI_ColAttributes, desc='', errnum=30, errmsg='this request may be for MS SQL Server'
[15.663]                 ------------------------------------------------------------
[15.664]                 hdbc=02E72C78, stmt=02E77808, result=02E70EA0
[15.665]                 prepare=0, internal=0
[15.666]                 bindings=02E791E8, bindings_allocated=1
[15.667]                 parameters=00000000, parameters_allocated=0
[15.668]                 statement_type=0, statement='select count(*) from az001.doc101 where (code,riga)=(1,1)
'
[15.670]                 stmt_with_params='select count(*) from az001.doc101 where (code,riga)=(1,1)
'
[15.671]                 data_at_exec=-1, current_exec_param=-1, put_data=0
[15.672]                 currTuple=-1, current_col=-1, lobj_fd=-1
[15.673]                 maxRows=0, rowset_size=1, keyset_size=0, cursor_type=0, scroll_concurrency=1
[15.674]                 cursor_name='SQL_CUR02E77808'
[15.675]                 ----------------QResult Info -------------------------------
[15.677]                 fields=02E77BA8, backend_tuples=02E78E70, tupleField=48729712, conn=02E72C78
[15.678]                 fetch_count=0, num_total_rows=1, num_fields=1, cursor='(NULL)'
[15.679]                 message='(NULL)', command='SELECT', notice='(NULL)'
[15.680]                 status=100, inTuples=0
[15.681]CONN ERROR: func=PGAPI_ColAttributes, desc='', errnum=0, errmsg='(NULL)'
[15.683]            ------------------------------------------------------------
[15.685]            henv=02E72C38, conn=02E72C78, status=1, num_stmts=16
[15.687]            sock=02E70A08, stmts=02E70AE0, lobj_type=-999
[15.688]            ---------------- Socket Info -------------------------------
[15.690]            socket=552, reverse=0, errornumber=0, errormsg='(NULL)'
[15.692]            buffer_in=48715736, buffer_out=48719856
[15.692]            buffer_filled_in=76, buffer_filled_out=0, buffer_read_in=76
[15.735]CONN ERROR: func=SQLNativeSqlW, desc='', errnum=-2, errmsg='Sql string too large'
[15.737]            ------------------------------------------------------------
[15.738]            henv=02E72C38, conn=02E72C78, status=1, num_stmts=16
[15.739]            sock=02E70A08, stmts=02E70AE0, lobj_type=-999
[15.740]            ---------------- Socket Info -------------------------------
[15.741]            socket=552, reverse=0, errornumber=0, errormsg='(NULL)'
[15.743]            buffer_in=48715736, buffer_out=48719856
[15.743]            buffer_filled_in=76, buffer_filled_out=0, buffer_read_in=76
[15.772]conn=02E72C78, query='SELECT * FROM az001.doc101'


Thanks in advance, and I apologize for my poor English.

Regards,

Moreno Dissegna
Developer
S.A. Software S.r.l.


Re: Show max_identifier_length causes heavy query execution

From
Ken Sell
Date:
Hi,

I’m the new connectivity engineer.

I’m attempting to recreate the problem, but haven’t seen the same results as you.
Can you send me the sequence of ODBC statements that cause the problem?

Thanks,

....Ken Sell


On 5/26/09 9:48 AM, "Moreno D." <moreno.d@hotmail.it> wrote:


Re: Show max_identifier_length causes heavy query execution

From
"Albe Laurenz"
Date:
Moreno D. wrote:
> I noticed that after executing a simple query that takes few
> milliseconds, the odbc driver executes another query which
> takes about 30 seconds to execute, because it scans a large table.
>
[...]
>
> Then the driver executes the heavy query "SELECT * FROM
> az001.doc101", which retrieves about 1.5 million rows.
>
[...]
>
> and this is the entire log file:
>
[...]
> [0.613]conn=02E72C78,
> PGAPI_DriverConnect(out)='DSN=PostgreSQL
> Log;DATABASE=muna;SERVER=localhost;PORT=5432;UID=client;PWD=xx
> xxxx;SSLmode=disable;ReadOnly=0;Protocol=7.4-1;FakeOidIndex=0;
> ShowOidColumn=0;RowVersioning=0;ShowSystemTables=0;ConnSetting
> s=;Fetch=100;Socket=4096;UnknownSizes=2;MaxVarcharSize=255;Max
> LongVarcharSize=8190;Debug=1;CommLog=1;Optimizer=0;Ksqo=1;UseD
> eclareFetch=0;TextAsLongVarchar=1;UnknownsAsLongVarchar=0;Bool
> sAsChar=1;Parse=0;CancelAsFreeStmt=0;ExtraSysTablePrefixes=dd_
> ;;LFConversion=1;UpdatableCursors=1;DisallowPremature=0;TrueIs
> Minus1=0;BI=0;ByteaAsLongVarBinary=1;UseServerSidePrepare=0;Lo
> werCaseIdentifier=0;XaOpt=1'

Without the ODBC statements it is difficult to say anything conclusive,
but it *might* be that you have the same problem I had some time ago.

I see that you set "Parse=0" in your connection options.
This will result in the ODBC driver executing "SELECT *" statements
when it needs to get a column description.

If that is the cause of your problem, then setting "Parse=1"
will fix it.

In ODBC Administrator on Windows, this option is called "Parse Statements"
and can be found in the advanced options page 1 (Button "Datasource").

Yours,
Laurenz Albe

Re: Show max_identifier_length causes heavy query execution

From
Moreno D.
Date:
>If that is the cause of your problem, then setting "Parse=1"
>will fix it.
>
>In ODBC Administrator on Windows, this option is called "Parse Statements"
>and can be found in the advanced options page 1 (Button "Datasource").

Thank you Laurenz, setting "Parse=1" solved my problem!
Anyway, another problem appeared at another point in the application; I think this is the reason why that flag was turned off.
I simplified the query that gives the error, and obtained this:

    SELECT 1 as prog  UNION  SELECT code from aaa; --This one gives error

where aaa is a previously created table (if I create the table in the same statement, I don't get the error):

    create table aaa(
    code integer   -- I also tried with character varying; it's the same
    );

If I write the two SELECTs in reverse order, it works:

    SELECT code as prog from aaa UNION  SELECT 1; --This one works

>Without the ODBC statements it is difficult to say something conclusive,

I attach the log file below, but I don't know what the "ODBC statements" are... Is it the "mylog" file? I'm using ADO components to connect to the database, so I don't know very much about ODBC, I'm sorry.

I analyzed the mylog in both cases and noticed this difference:

(This is the correct one)
[2956-2.138]parse_statement: entering...
[2956-2.138]unquoted=1, quote=0, dquote=0, numeric=0, delim=' ', token='SELECT', ptr='code as prog from aaa UNION  SELECT 1;
'
[2956-2.140]SELECT
[2956-2.140]unquoted=1, quote=0, dquote=0, numeric=0, delim=' ', token='code', ptr='as prog from aaa UNION  SELECT 1;
'
[2956-2.142]blevel=0 btoken=SELECT in_dot=0 in_field=0 tbname=
[2956-2.143]reallocing at nfld=0
[2956-2.144]got field='code', dot='(null)'
[2956-2.144]unquoted=1, quote=0, dquote=0, numeric=0, delim=' ', token='as', ptr='prog from aaa UNION  SELECT 1;
'
[2956-2.146]blevel=0 btoken=code in_dot=0 in_field=1 tbname=
[2956-2.147]got AS
[2956-2.147]unquoted=1, quote=0, dquote=0, numeric=0, delim=' ', token='prog', ptr='from aaa UNION  SELECT 1;
'
[2956-2.149]blevel=0 btoken=as in_dot=0 in_field=1 tbname=
[2956-2.149]alias for field 'code' is 'prog'
[2956-2.150]unquoted=1, quote=0, dquote=0, numeric=0, delim=' ', token='from', ptr='aaa UNION  SELECT 1;
'
[2956-2.152]First [2956-2.152]FROM
[2956-2.152]unquoted=1, quote=0, dquote=0, numeric=0, delim=' ', token='aaa', ptr='UNION  SELECT 1;
'
[2956-2.153]got table = 'aaa'
[2956-2.154]unquoted=1, quote=0, dquote=0, numeric=0, delim=' ', token='UNION', ptr='SELECT 1;
'
[2956-2.155]UNION...                       <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

(And this is the one that gives the error)
[3244-1.937]parse_statement: entering...
[3244-1.937]unquoted=1, quote=0, dquote=0, numeric=0, delim=' ', token='SELECT', ptr='1 as prog  UNION  SELECT code from aaa;
'
[3244-1.939]SELECT
[3244-1.939]unquoted=1, quote=0, dquote=0, numeric=1, delim=' ', token='1', ptr='as prog  UNION  SELECT code from aaa;
'
[3244-1.941]blevel=0 btoken=SELECT in_dot=0 in_field=0 tbname=
[3244-1.942]reallocing at nfld=0
[3244-1.942]**** got numeric: nfld = 0
[3244-1.943]got field='(null)', dot='(null)'
[3244-1.944]unquoted=1, quote=0, dquote=0, numeric=0, delim=' ', token='as', ptr='prog  UNION  SELECT code from aaa;
'
[3244-1.945]blevel=0 btoken=1 in_dot=0 in_field=1 tbname=
[3244-1.946]got AS
[3244-1.946]unquoted=1, quote=0, dquote=0, numeric=0, delim=' ', token='prog', ptr='UNION  SELECT code from aaa;
'
[3244-1.948]blevel=0 btoken=as in_dot=0 in_field=1 tbname=
[3244-1.948]alias for field '(null)' is 'prog'
[3244-1.949]unquoted=1, quote=0, dquote=0, numeric=0, delim=' ', token='UNION', ptr='SELECT code from aaa;
'
[3244-1.951]blevel=0 btoken=prog in_dot=0 in_field=0 tbname=prog
[3244-1.952]got field='UNION', dot='(null)' <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
[3244-1.953]unquoted=1, quote=0, dquote=0, numeric=0, delim=' ', token='SELECT', ptr='code from aaa;




This is psqlodbc_2008.log:
[0.007]Driver Version='08.03.0400,200811070001' linking 1400 static Multithread library
[0.015]Global Options: fetch=100, socket=4096, unknown_sizes=2, max_varchar_size=255, max_longvarchar_size=8190
[0.018]                disable_optimizer=0, ksqo=1, unique_index=1, use_declarefetch=0
[0.021]                text_as_longvarchar=1, unknowns_as_longvarchar=0, bools_as_char=1 NAMEDATALEN=64
[0.025]                extra_systable_prefixes='dd_;', conn_settings='' conn_encoding=''
[0.104]    [ PostgreSQL version string = '8.3.4' ]
[0.106]    [ PostgreSQL version number = '8.3' ]
[0.121]conn=031C2470, query='select oid, typbasetype from pg_type where typname = 'lo''
[0.139]    [ fetched 0 rows ]
[0.148]    [ Large Object oid = -999 ]
[0.150]    [ Client encoding = 'UTF8' (code = 6) ]
[0.161]conn=031C2470, PGAPI_DriverConnect(out)='DSN=PostgreSQL Log;DATABASE=muna;SERVER=localhost;PORT=5432;UID=client;PWD=xxxxxx;SSLmode=disable;ReadOnly=0;Protocol=7.4-1;FakeOidIndex=0;ShowOidColumn=0;RowVersioning=0;ShowSystemTables=0;ConnSettings=;Fetch=100;Socket=4096;UnknownSizes=2;MaxVarcharSize=255;MaxLongVarcharSize=8190;Debug=1;CommLog=1;Optimizer=0;Ksqo=1;UseDeclareFetch=0;TextAsLongVarchar=1;UnknownsAsLongVarchar=0;BoolsAsChar=1;Parse=1;CancelAsFreeStmt=0;ExtraSysTablePrefixes=dd_;;LFConversion=1;UpdatableCursors=1;DisallowPremature=0;TrueIsMinus1=0;BI=0;ByteaAsLongVarBinary=1;UseServerSidePrepare=0;LowerCaseIdentifier=0;XaOpt=1'
[1.909]conn=031C2470, query='SELECT 1 as prog  UNION  SELECT code from aaa;
'
[1.922]    [ fetched 1 rows ]
[1.971]conn=031C2470, query='select current_schema()'
[1.989]    [ fetched 1 rows ]
[2.003]conn=031C2470, query='select nspname from pg_namespace n, pg_class c where c.relnamespace=n.oid and c.oid='"aaa"'::regclass'
[2.024]    [ fetched 1 rows ]
[2.095]conn=031C2470, query='select n.nspname, c.relname, a.attname, a.atttypid, t.typname, a.attnum, a.attlen, a.atttypmod, a.attnotnull, c.relhasrules, c.relkind, c.oid, d.adsrc from (((pg_catalog.pg_class c inner join pg_catalog.pg_namespace n on n.oid = c.relnamespace and c.relname = E'aaa' and n.nspname = E'public') inner join pg_catalog.pg_attribute a on (not a.attisdropped) and a.attnum > 0 and a.attrelid = c.oid) inner join pg_catalog.pg_type t on t.oid = a.atttypid) left outer join pg_attrdef d on a.atthasdef and d.adrelid = a.attrelid and d.adnum = a.attnum order by n.nspname, c.relname, attnum'
[2.152]    [ fetched 1 rows ]
[2.289]PGAPI_Columns: table='aaa',field_name='code',type=23,name='int4'
[2.393]STATEMENT ERROR: func=PGAPI_DescribeCol, desc='Col#=1, #Cols=1,1 keys=2', errnum=13, errmsg='Invalid column number in DescribeCol.'
[2.397]                 ------------------------------------------------------------
[2.398]                 hdbc=031C2470, stmt=031C6FD0, result=031C72E8
[2.399]                 prepare=0, internal=0
[2.400]                 bindings=031C89E0, bindings_allocated=1
[2.401]                 parameters=00000000, parameters_allocated=0
[2.402]                 statement_type=0, statement='SELECT 1 as prog  UNION  SELECT code from aaa;
'
[2.403]                 stmt_with_params='SELECT 1 as prog  UNION  SELECT code from aaa;
'
[2.405]                 data_at_exec=-1, current_exec_param=-1, put_data=0
[2.406]                 currTuple=-1, current_col=-1, lobj_fd=-1
[2.406]                 maxRows=0, rowset_size=1, keyset_size=0, cursor_type=0, scroll_concurrency=1
[2.407]                 cursor_name='SQL_CUR031C6FD0'
[2.408]                 ----------------QResult Info -------------------------------
[2.409]                 fields=031C7398, backend_tuples=031C8698, tupleField=52201112, conn=031C2470
[2.411]                 fetch_count=0, num_total_rows=1, num_fields=1, cursor='(NULL)'
[2.412]                 message='(NULL)', command='SELECT', notice='(NULL)'
[2.414]                 status=100, inTuples=0
[2.415]CONN ERROR: func=PGAPI_DescribeCol, desc='Col#=1, #Cols=1,1 keys=2', errnum=0, errmsg='(NULL)'
[2.418]            ------------------------------------------------------------
[2.419]            henv=031C0A70, conn=031C2470, status=1, num_stmts=16
[2.420]            sock=031C0AA0, stmts=031C0B68, lobj_type=-999
[2.421]            ---------------- Socket Info -------------------------------
[2.422]            socket=552, reverse=0, errornumber=0, errormsg='(NULL)'
[2.423]            buffer_in=52187072, buffer_out=52191176
[2.424]            buffer_filled_in=456, buffer_filled_out=0, buffer_read_in=456
[3.990]conn=031C2470, PGAPI_Disconnect

Regards,

Moreno Dissegna
Developer
S.A. Software S.r.l.







Re: Show max_identifier_length causes heavy query execution

From
Hiroshi Inoue
Date:
Moreno D. wrote:
>> If that is the cause of your problem, then setting "Parse=1"
>> will fix it.
>>
>> In ODBC Administrator on Windows, this option is called "Parse Statements"
>> and can be found in the advanced options page 1 (Button "Datasource").
>
> Thank you Laurenz, setting "Parse=1" solved my problem!
> Anyway, another problem appeared at another point in the application; I think this is the reason why that flag was turned off.

Please look at the attached documentation, especially the Parse Statements, ServerSide Prepare and Disallow Premature options.

regards,
Hiroshi Inoue

Re: Show max_identifier_length causes heavy query execution

From
Moreno D.
Date:
>Please look at the attached documentation, especially the Parse Statements, ServerSide Prepare and Disallow Premature options.
Thank you Hiroshi,
I solved my problems by setting ServerSide Prepare=1, Parse Statements=0 and Disallow Premature=0.
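
For reference, I believe the equivalent keywords in the ODBC connection string are UseServerSidePrepare, Parse and DisallowPremature (they appear with these names in the PGAPI_DriverConnect output in the logs above), so the same settings in string form would be roughly:

UseServerSidePrepare=1;Parse=0;DisallowPremature=0
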
Thanks for the documentation; the next time I have problems I will read it carefully.
I would also like to thank Laurenz and Ken for their help.

Moreno Dissegna
Developer
S.A. Software S.r.l.


