Re: max_stack_depth problem though query is substantially smaller - Mailing list pgsql-general

From Tom Lane
Subject Re: max_stack_depth problem though query is substantially smaller
Date
Msg-id 31448.1460215551@sss.pgh.pa.us
In response to max_stack_depth problem though query is substantially smaller  ("Charles Clavadetscher" <clavadetscher@swisspug.org>)
List pgsql-general
"Bannert  Matthias" <bannert@kof.ethz.ch> writes:
> [ very deep stack of parser transformExprRecurse calls ]

> #20137 0x00007fe7fb80ab8c in pg_analyze_and_rewrite (parsetree=parsetree@entry=0x7fe7fffdb2a0,
> query_string=query_string@entry=0x7fe7fdf606b0 "INSERT INTO ts_updates(ts_key, ts_data, ts_frequency) VALUES
> ('some_id.sector_all.news_all_d',hstore('1900-01-01','-0.395131869823009')||hstore('1900-01-02','-0.395131869823009')||hstore('1"...,
> paramTypes=paramTypes@entry=0x0, numParams=numParams@entry=0) at
> /build/postgresql-9.3-G1RSAD/postgresql-9.3-9.3.11/build/../src/backend/tcop/postgres.c:640

The SQL fragment we can see here suggests that your "40K entry hstore" is
getting built up by stringing together 40K hstore concatenation operators.
Don't do that.  Even without the parser stack depth issue, it's uselessly
inefficient.  I presume you're generating this statement mechanically,
not by hand, so you could equally well have the app emit

'1900-01-01 => -0.395131869823009, 1900-01-02 => -0.395131869823009, ...'::hstore

which would look like a single hstore literal to the parser, and be
processed much more quickly.
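A minimal app-side sketch of that generation step (Python here purely for illustration; the helper name and sample data are hypothetical, not from the thread — the one real wrinkle is that hstore's input syntax requires backslashes and double quotes inside keys or values to be escaped):

```python
def hstore_literal(pairs):
    """Render a dict as one hstore literal: '"k" => "v", ...'."""
    def esc(s):
        # Per hstore input syntax, escape backslashes and double quotes.
        return s.replace("\\", "\\\\").replace('"', '\\"')
    return ", ".join('"%s" => "%s"' % (esc(k), esc(v))
                     for k, v in pairs.items())

# Hypothetical sample data standing in for the 40K entries:
lit = hstore_literal({"1900-01-01": "-0.395131869823009"})
# The app would then interpolate lit into the INSERT as '<lit>'::hstore,
# yielding a single literal token for the parser instead of 40K operators.
```

The point is that the whole payload becomes one constant the parser scans linearly, rather than a 40K-deep tree of `||` operator nodes.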

If you insist on emitting SQL statements that have operators nested
to such depths, then yes you'll need to increase max_stack_depth to
whatever it takes to allow it.
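For reference, raising it per-session looks like this (the value shown is illustrative; max_stack_depth must stay comfortably below the actual OS stack limit, typically checked with "ulimit -s", or the server can crash instead of reporting a clean error):

```sql
-- Illustrative value only; keep it below the kernel's stack rlimit.
SET max_stack_depth = '7MB';
-- Or set it for all sessions in postgresql.conf and reload.
```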

            regards, tom lane

