Re: Analyzing foreign tables & memory problems - Mailing list pgsql-hackers

From Albe Laurenz
Subject Re: Analyzing foreign tables & memory problems
Date
Msg-id D960CB61B694CF459DCFB4B0128514C207D4F9DA@exadv11.host.magwien.gv.at
In response to Re: Analyzing foreign tables & memory problems  ("Albe Laurenz" <laurenz.albe@wien.gv.at>)
Responses Re: Analyzing foreign tables & memory problems
List pgsql-hackers
I wrote:
> Noah Misch wrote:
>>> During ANALYZE, in analyze.c, the functions compute_minimal_stats
>>> and compute_scalar_stats skip values whose length exceeds
>>> WIDTH_THRESHOLD (= 1024) when calculating statistics, other than
>>> counting them as "too wide" rows and assuming they are all
>>> different.
>>>
>>> This works fine with regular tables.

>>> With foreign tables the situation is different.  Even though
>>> values exceeding WIDTH_THRESHOLD won't get used, the complete
>>> rows will be fetched from the foreign table.  This can easily
>>> exhaust maintenance_work_mem.

>>> I can think of two remedies:
>>> 1) Expose WIDTH_THRESHOLD in commands/vacuum.h and add documentation
>>>    so that the authors of foreign data wrappers are aware of the
>>>    problem and can avoid it on their side.
>>>    This would be quite simple.

>> Seems reasonable.  How would the FDW return an indication that a
>> value was non-NULL but removed due to excess width?
>
> The FDW would return a value of length WIDTH_THRESHOLD+1 that is
> long enough to be recognized as too long, but not long enough to
> cause a problem.
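The idea can be sketched in plain C. This is only an illustration of the truncation trick, not the attached patch; truncate_for_analyze is a hypothetical helper name, and the constant mirrors WIDTH_THRESHOLD from src/backend/commands/analyze.c:

```c
#include <stdlib.h>
#include <string.h>

/* Mirrors WIDTH_THRESHOLD in src/backend/commands/analyze.c */
#define WIDTH_THRESHOLD 1024

/*
 * Hypothetical helper an FDW could apply before handing a fetched value
 * to ANALYZE: cut it down to WIDTH_THRESHOLD + 1 bytes, which is long
 * enough for compute_minimal_stats/compute_scalar_stats to classify it
 * as "too wide", but short enough not to exhaust maintenance_work_mem.
 * Returns a newly allocated copy; the caller frees it.
 */
char *
truncate_for_analyze(const char *value)
{
    size_t  len = strlen(value);
    size_t  keep = (len > WIDTH_THRESHOLD) ? WIDTH_THRESHOLD + 1 : len;
    char   *result = malloc(keep + 1);

    if (result == NULL)
        return NULL;
    memcpy(result, value, keep);
    result[keep] = '\0';
    return result;
}
```

Values at or under the threshold pass through unchanged, so statistics on normal-width columns are unaffected.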

Here is a simple patch for that.

Yours,
Laurenz Albe

