Re: Analyzing foreign tables & memory problems - Mailing list pgsql-hackers

From Albe Laurenz
Subject Re: Analyzing foreign tables & memory problems
Msg-id D960CB61B694CF459DCFB4B0128514C2049FCE84@exadv11.host.magwien.gv.at
In response to Analyzing foreign tables & memory problems  ("Albe Laurenz" <laurenz.albe@wien.gv.at>)
List pgsql-hackers
Tom Lane wrote:
>>> I'm fairly skeptical that this is a real problem, and would prefer not
>>> to complicate wrappers until we see some evidence from the field that
>>> it's worth worrying about.

>> If I have a table with 100000 rows and default_statistics_target
>> at 100, then a sample of 30000 rows will be taken.

>> If each row contains binary data of 1 MB (an image), then the
>> data structure returned will use about 30 GB of memory, which
>> will probably exceed maintenance_work_mem.

>> Or is there a flaw in my reasoning?

> Only that I don't believe this is a real-world scenario for a foreign
> table.  If you have a foreign table in which all, or even many, of the
> rows are that wide, its performance is going to suck so badly that
> you'll soon look for a different schema design anyway.

Of course it wouldn't work well to SELECT * from such a foreign table,
but it would work well enough to get one or a few rows at a time,
which is probably such a table's purpose in life anyway.
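
(To spell out the arithmetic in the quoted example: std_typanalyze() asks for
300 * statistics_target sample rows, so with the default target of 100 that is
30000 rows.  A throwaway back-of-the-envelope sketch, plain C and nothing
PostgreSQL-specific:

    #include <stdio.h>

    int main(void)
    {
        /* std_typanalyze() requests 300 * statistics target sample rows */
        int target = 100;                        /* default_statistics_target */
        int samplerows = 300 * target;           /* = 30000 rows */
        double row_width = 1024.0 * 1024.0;      /* ~1 MB of binary data per row */

        printf("sample rows: %d\n", samplerows);
        printf("memory for the sample: ~%.1f GB\n",
               samplerows * row_width / (1024.0 * 1024.0 * 1024.0));
        return 0;
    }

which prints roughly 29.3 GB, i.e. the "about 30 GB" above.)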

> I don't want to complicate FDWs for this until it's an actual bottleneck
> in real applications, which it may never be, and certainly won't be
> until we've gone through a few rounds of performance refinement for
> basic operations.

I agree that it may not be right to do something invasive
to solve an anticipated problem that may never materialize.

So scrap my second idea.  But I think that exposing WIDTH_THRESHOLD
wouldn't be unreasonable, would it?
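
To illustrate what I have in mind, here is a rough sketch only -- the helper
name clamp_wide_value is made up, and it assumes WIDTH_THRESHOLD (1024 in
commands/analyze.c) were exposed in a header an FDW can include.  The idea is
that a sample-acquisition routine could replace any varlena value wider than
the threshold with a truncated copy before storing the row, so the sample
array holds at most about targrows * WIDTH_THRESHOLD bytes per wide column
instead of the full values:

    #include "postgres.h"
    #include "fmgr.h"
    #include "access/tuptoaster.h"      /* toast_raw_datum_size() */

    #ifndef WIDTH_THRESHOLD
    #define WIDTH_THRESHOLD 1024        /* assumed; mirrors commands/analyze.c */
    #endif

    /*
     * Sketch: clamp an over-wide varlena value to WIDTH_THRESHOLD bytes
     * before it is kept in the ANALYZE sample.
     */
    static Datum
    clamp_wide_value(Datum value, bool isnull, int16 typlen)
    {
        if (isnull || typlen != -1)     /* only varlena values can get wide */
            return value;

        if (toast_raw_datum_size(value) <= WIDTH_THRESHOLD)
            return value;

        /* keep only a prefix; ANALYZE treats over-wide values specially anyway */
        return PointerGetDatum(PG_DETOAST_DATUM_SLICE(value, 0, WIDTH_THRESHOLD));
    }

Whether truncating like this is acceptable for the wrapper's statistics is of
course the wrapper author's call; the point is only that it needs to know the
threshold value.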

Yours,
Laurenz Albe

