Re: Analyzing foreign tables & memory problems - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: Analyzing foreign tables & memory problems
Date:
Msg-id: 27717.1335795870@sss.pgh.pa.us
In response to: Analyzing foreign tables & memory problems ("Albe Laurenz" <laurenz.albe@wien.gv.at>)
Responses: Re: Analyzing foreign tables & memory problems
Responses: Re: Analyzing foreign tables & memory problems
List: pgsql-hackers
"Albe Laurenz" <laurenz.albe@wien.gv.at> writes:
> During ANALYZE, in analyze.c, the functions compute_minimal_stats
> and compute_scalar_stats do not use values whose length exceeds
> WIDTH_THRESHOLD (= 1024) for calculating statistics; such values
> are only counted as "too wide rows" and assumed to be all
> different.

> This works fine with regular tables; values exceeding that threshold
> don't get detoasted and won't consume excessive memory.

> With foreign tables the situation is different.  Even though
> values exceeding WIDTH_THRESHOLD won't get used, the complete
> rows will be fetched from the foreign table.  This can easily
> exhaust maintenance_work_mem.

I'm fairly skeptical that this is a real problem, and would prefer not
to complicate wrappers until we see some evidence from the field that
it's worth worrying about.  The WIDTH_THRESHOLD logic was designed a
dozen years ago when common settings for work_mem were a lot smaller
than today.  Moreover, to my mind it's always been about avoiding
detoasting operations as much as saving memory, and we don't have
anything equivalent to that consideration in foreign data wrappers.
        regards, tom lane

