Re: Analyzing foreign tables & memory problems - Mailing list pgsql-hackers

From Albe Laurenz
Subject Re: Analyzing foreign tables & memory problems
Date
Msg-id D960CB61B694CF459DCFB4B0128514C2049FCE83@exadv11.host.magwien.gv.at
In response to Analyzing foreign tables & memory problems  ("Albe Laurenz" <laurenz.albe@wien.gv.at>)
List pgsql-hackers
Simon Riggs wrote:
>>> During ANALYZE, in analyze.c, the functions compute_minimal_stats
>>> and compute_scalar_stats exclude values whose length exceeds
>>> WIDTH_THRESHOLD (= 1024) from the statistics calculation;
>>> they are only counted as "too wide rows" and assumed
>>> to be all different.
>>
>>> This works fine with regular tables; values exceeding that threshold
>>> don't get detoasted and won't consume excessive memory.
>>
>>> With foreign tables the situation is different.  Even though
>>> values exceeding WIDTH_THRESHOLD won't get used, the complete
>>> rows will be fetched from the foreign table.  This can easily
>>> exhaust maintenance_work_mem.
>>
>> I'm fairly skeptical that this is a real problem

> AFAIK it's not possible to select all columns from an Oracle database.
> If you use an unqualified LONG column as part of the query, you
> get an error.
>
> So there are issues with simply requesting data for analysis.

To elaborate on the specific case of Oracle: I have given up on LONG,
since a) it has been deprecated for a long time and
b) it is not possible to retrieve a LONG column unless you know
its length in advance.

But you can have several BLOB and CLOB columns in a table, each
of which can be arbitrarily large and can lead to the problem
I described.

Yours,
Laurenz Albe

