Thread: Calculating avg. width when operator = is missing
Hi Hackers,
I've recently stumbled upon a problem with table bloat estimation when there are columns of type JSON.
The quick bloat estimation queries sum pg_statistic.stawidth over the table's columns, but for JSON the corresponding entry is never created by the ANALYZE command, because the type has no equality comparison operator. I understand why there is no such operator defined for this particular type, but shouldn't we still try to produce a meaningful average width estimate?
In my case the actual bloat is around 40%, as verified with pgstattuple, while the bloat reported by the quick estimate can be between 75% and 95%(!) in the three instances of this problem. We're talking about some hundreds of GB of miscalculation.
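For illustration, a minimal sketch of the gap (table and column names are made up):

CREATE TABLE docs (id int, payload json);
-- ... load data, then:
ANALYZE docs;

-- No row comes back for "payload": ANALYZE skips the column because
-- json has no "=" operator, so a quick bloat estimate that sums
-- avg_width over the table's columns silently undercounts the row width.
SELECT attname, avg_width
  FROM pg_stats
 WHERE schemaname = 'public' AND tablename = 'docs';

-- Ground truth requires scanning the whole table:
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstattuple('docs');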
The attached patch against master makes std_typanalyze still try to compute the minimal stats even if there is no "=" operator. Makes sense?
I could also find this report in the archives that talks about a similar problem, though there it was due to all values being over the analyze threshold:
http://www.postgresql.org/message-id/flat/12480.1389370514@sss.pgh.pa.us#12480.1389370514@sss.pgh.pa.us
I think we could try harder, otherwise any estimate relying on average width can be way off in such cases.
--
Alex
On 09/22/2015 12:16 PM, Shulgin, Oleksandr wrote:
> Hi Hackers,
>
> I've recently stumbled upon a problem with table bloat estimation in
> case there are columns of type JSON.
>
> The quick bloat estimation queries use sum over pg_statistic.stawidth
> of table's columns, but in case of JSON the corresponding entry is
> never created by the ANALYZE command due to equality comparison
> operator missing. I understand why there is no such operator defined
> for this particular type, but shouldn't we still try to produce
> meaningful average width estimation?
>
> In my case the actual bloat is around 40% as verified with
> pgstattuple, while the bloat reported by quick estimate can be between
> 75% and 95%(!) in three instances of this problem. We're talking
> about some hundreds of GB of miscalculation.
>
> Attached patch against master makes the std_typanalyze still try to
> compute the minimal stats even if there is no "=" operator. Makes sense?
>
> I could also find this report in archives that talks about similar
> problem, but due to all values being over the analyze threshold:
>
> http://www.postgresql.org/message-id/flat/12480.1389370514@sss.pgh.pa.us#12480.1389370514@sss.pgh.pa.us
>
> I think we could try harder, otherwise any estimate relying on average
> width can be way off in such cases.

Yes, "revenons à nos moutons" (back to the matter at hand). You can set
up text based comparison ops fairly easily for json - you just need to
be aware of the limitations. See
https://gist.github.com/adunstan/32ad224d7499d2603708

But I agree we should be able to do some analysis of types without
comparison ops.

cheers

andrew
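For readers without the gist at hand, the approach is presumably along these lines (a sketch only, not the gist verbatim; comparison happens on the stored text, so documents that differ only in key order or whitespace compare as unequal):

CREATE FUNCTION json_cmp(json, json) RETURNS integer
    LANGUAGE sql IMMUTABLE STRICT
    AS 'SELECT bttextcmp($1::text, $2::text)';

CREATE FUNCTION json_lt(json, json) RETURNS boolean
    LANGUAGE sql IMMUTABLE STRICT AS 'SELECT $1::text < $2::text';
CREATE FUNCTION json_eq(json, json) RETURNS boolean
    LANGUAGE sql IMMUTABLE STRICT AS 'SELECT $1::text = $2::text';
CREATE FUNCTION json_gt(json, json) RETURNS boolean
    LANGUAGE sql IMMUTABLE STRICT AS 'SELECT $1::text > $2::text';

CREATE OPERATOR < (LEFTARG = json, RIGHTARG = json, PROCEDURE = json_lt);
CREATE OPERATOR = (LEFTARG = json, RIGHTARG = json, PROCEDURE = json_eq,
                   COMMUTATOR = =);
CREATE OPERATOR > (LEFTARG = json, RIGHTARG = json, PROCEDURE = json_gt);

-- <= and >= would be defined the same way; a default btree operator
-- class is what lets ANALYZE (and btree indexes) pick the operators
-- up for the type:
CREATE OPERATOR CLASS json_ops DEFAULT FOR TYPE json USING btree AS
    OPERATOR 1 <, OPERATOR 3 =, OPERATOR 5 >,
    FUNCTION 1 json_cmp(json, json);

The SQL-level functions and the casts to text in this sketch are the per-call overhead that comes up next in the thread.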
<p dir="ltr">On Sep 22, 2015 8:58 PM, "Andrew Dunstan" <<a href="mailto:andrew@dunslane.net">andrew@dunslane.net</a>>wrote:<br /> ><br /> ><br /> ><br /> > On 09/22/201512:16 PM, Shulgin, Oleksandr wrote:<br /> >><br /> >> Hi Hackers,<br /> >><br /> >> I'verecently stumbled upon a problem with table bloat estimation in case there are columns of type JSON.<br /> >><br/> >> The quick bloat estimation queries use sum over pg_statistic.stawidth of table's columns, but in caseof JSON the corresponding entry is never created by the ANALYZE command due to equality comparison operator missing. I understand why there is no such operator defined for this particular type, but shouldn't we still try to producemeaningful average width estimation?<br /> >><br /> >> In my case the actual bloat is around 40% as verifiedwith pgstattuple, while the bloat reported by quick estimate can be between 75% and 95%(!) in three instances ofthis problem. We're talking about some hundreds of GB of miscalculation.<br /> >><br /> >> Attached patchagainst master makes the std_typanalyze still try to compute the minimal stats even if there is no "=" operator. Makessense?<br /> >><br /> >> I could also find this report in archives that talks about similar problem, butdue to all values being over the analyze threshold:<br /> >><br /> >> <a href="http://www.postgresql.org/message-id/flat/12480.1389370514@sss.pgh.pa.us#12480.1389370514@sss.pgh.pa.us">http://www.postgresql.org/message-id/flat/12480.1389370514@sss.pgh.pa.us#12480.1389370514@sss.pgh.pa.us</a><br />>><br /> >> I think we could try harder, otherwise any estimate relying on average width can be way off insuch cases.<br /> ><br /> > Yes, "/revenons/ à /nos moutons/." You can set up text based comparison ops fairly easilyfor json - you just need to be aware of the limitations. See <a href="https://gist.github.com/adunstan/32ad224d7499d2603708">https://gist.github.com/adunstan/32ad224d7499d2603708</a><p dir="ltr">Yes,I've already tried this approach and have found that analyze performance degrades an order of magnitude dueto sql-level function overhead and casts to text. In my tests, from 200ms to 2000ms with btree ops on a default sampleof 30,000 rows.<p dir="ltr">Should have mentioned that.<p dir="ltr">There is a very hacky way to substitute bttextcmpfor the sort support function after defining the opclass by updating pg_amproc, buy I would rather avoid that. :-)<p dir="ltr">--<br /> Alex<br />
Shulgin, Oleksandr wrote:
> On Sep 22, 2015 8:58 PM, "Andrew Dunstan" <andrew@dunslane.net> wrote:
> > Yes, "revenons à nos moutons." You can set up text based comparison
> > ops fairly easily for json - you just need to be aware of the limitations.
> > See https://gist.github.com/adunstan/32ad224d7499d2603708
>
> Yes, I've already tried this approach and have found that analyze
> performance degrades an order of magnitude due to sql-level function
> overhead and casts to text. In my tests, from 200ms to 2000ms with btree
> ops on a default sample of 30,000 rows.

You should be able to create a C function json_cmp() that simply calls
bttextcmp() internally, and C functions for each operator using that
one, in the same way.

In any case I think your patch is a good starting point.

--
Álvaro Herrera      http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> In any case I think your patch is a good starting point.

The comments seemed to need some wordsmithing, but I think this is
probably basically a good idea; we've had similar complaints before
about some other equality-less datatypes, such as point.

Should we consider this HEAD-only, or a back-patchable bug fix?
Or perhaps compromise on HEAD + 9.5?

			regards, tom lane
Tom Lane wrote:
> Should we consider this HEAD-only, or a back-patchable bug fix?
> Or perhaps compromise on HEAD + 9.5?

It looks like a bug to me, but I think it might destabilize approved
execution plans(*), so it may not be such a great idea to back patch
branches that are already released. I think HEAD + 9.5 is good.

(*) I hear there are even applications where queries and their approved
execution plans are kept in a manifest, and plans that deviate from that
raise all kinds of alarms. I have never seen such a thing ...

--
Álvaro Herrera      http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Tue, Sep 22, 2015 at 11:17 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
> Shulgin, Oleksandr wrote:
> > On Sep 22, 2015 8:58 PM, "Andrew Dunstan" <andrew@dunslane.net> wrote:
> > > Yes, "revenons à nos moutons." You can set up text based comparison
> > > ops fairly easily for json - you just need to be aware of the limitations.
> > > See https://gist.github.com/adunstan/32ad224d7499d2603708
> >
> > Yes, I've already tried this approach and have found that analyze
> > performance degrades an order of magnitude due to sql-level function
> > overhead and casts to text. In my tests, from 200ms to 2000ms with btree
> > ops on a default sample of 30,000 rows.
>
> You should be able to create a C function json_cmp() that simply calls
> bttextcmp() internally, and C functions for each operator using that
> one, in the same way.
Yes, but I didn't try this because of the requirement to compile/install/maintain an externally loadable module. If I could just use CREATE FUNCTION on one of Postgres' internal functions such as texteq or bttextcmp (with obj_file of NULL, for example), I would definitely do that. :-)
--
Alex
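For what it's worth, CREATE FUNCTION does accept LANGUAGE internal, which binds a SQL-callable function to a built-in by name with no object file at all (superuser only). Whether this is safe for json is an assumption here, since it relies on json's varlena representation being the raw text, identical to text's; a hypothetical sketch rather than tested advice:

-- Hypothetical sketch: resolve directly to built-in C functions by
-- name, no loadable module required. Assumes json is stored as the
-- raw text, matching text's representation.
CREATE FUNCTION json_cmp(json, json) RETURNS integer
    AS 'bttextcmp' LANGUAGE internal IMMUTABLE STRICT;
CREATE FUNCTION json_eq(json, json) RETURNS boolean
    AS 'texteq' LANGUAGE internal IMMUTABLE STRICT;

If that works, the operator and operator class definitions from the earlier sketch could point at these instead of the SQL-level wrappers, avoiding the per-call overhead measured above.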
On Tue, Sep 22, 2015 at 11:56 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
> Tom Lane wrote:
> > Should we consider this HEAD-only, or a back-patchable bug fix?
> > Or perhaps compromise on HEAD + 9.5?
>
> It looks like a bug to me, but I think it might destabilize approved
> execution plans(*), so it may not be such a great idea to back patch
> branches that are already released. I think HEAD + 9.5 is good.
>
> (*) I hear there are even applications where queries and their approved
> execution plans are kept in a manifest, and plans that deviate from that
> raise all kinds of alarms. I have never seen such a thing ...
Ugh. Anyway, do you expect any plans to change only due to avg. width estimation being different? Why would that be so?
--
Alex
On Tue, Sep 22, 2015 at 11:43 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> > In any case I think your patch is a good starting point.
>
> The comments seemed to need some wordsmithing, but I think this is
> probably basically a good idea; we've had similar complaints before
> about some other equality-less datatypes, such as point.
>
> Should we consider this HEAD-only, or a back-patchable bug fix?
> Or perhaps compromise on HEAD + 9.5?
I failed to realize that the complaint I referred to, regarding overly wide sample values, was addressed back then by this commit: 6286526207d53e5b31968103adb89b4c9cd21499
For what it's worth, that time the decision was "This has been like this since roughly neolithic times, so back-patch to all supported branches." Does the same logic not apply here?
--
Alex
"Shulgin, Oleksandr" <oleksandr.shulgin@zalando.de> writes: > On Tue, Sep 22, 2015 at 11:56 PM, Alvaro Herrera <alvherre@2ndquadrant.com> > wrote: >> It looks like a bug to me, but I think it might destabilize approved >> execution plans(*), so it may not be such a great idea to back patch >> branches that are already released. I think HEAD + 9.5 is good. >> >> (*) I hear there are even applications where queries and their approved >> execution plans are kept in a manifest, and plans that deviate from that >> raise all kinds of alarms. I have never seen such a thing ... > Ugh. Anyway, do you expect any plans to change only due to avg. width > estimation being different? Why would that be so? Certainly, eg it could affect a decision about whether to use a hash join or hash aggregation through changing the planner's estimate of the required hashtable size. We wouldn't be bothering to track that data if it didn't affect plans. Personally I think Alvaro's position is unduly conservative: to the extent that plans change it'd likely be for the better. But I'm not excited enough to fight hard about it. regards, tom lane
Tom Lane wrote:
> Personally I think Alvaro's position is unduly conservative: to the extent
> that plans change it'd likely be for the better. But I'm not excited
> enough to fight hard about it.

I don't really care enough. We have received some complaints about
keeping plans stable, but maybe it's okay.

--
Álvaro Herrera      http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Sep 23, 2015 at 3:21 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
"Shulgin, Oleksandr" <oleksandr.shulgin@zalando.de> writes:
> On Tue, Sep 22, 2015 at 11:56 PM, Alvaro Herrera <alvherre@2ndquadrant.com>
> wrote:
>> It looks like a bug to me, but I think it might destabilize approved
>> execution plans(*), so it may not be such a great idea to back patch
>> branches that are already released. I think HEAD + 9.5 is good.
>>
>> (*) I hear there are even applications where queries and their approved
>> execution plans are kept in a manifest, and plans that deviate from that
>> raise all kinds of alarms. I have never seen such a thing ...
> Ugh. Anyway, do you expect any plans to change only due to avg. width
> estimation being different? Why would that be so?
Certainly, eg it could affect a decision about whether to use a hash join
or hash aggregation through changing the planner's estimate of the
required hashtable size. We wouldn't be bothering to track that data if
it didn't affect plans.
Personally I think Alvaro's position is unduly conservative: to the extent
that plans change it'd likely be for the better. But I'm not excited
enough to fight hard about it.
Yeah, I can see that now, as I was intensively studying the hash node code today for an unrelated reason.
I also believe that, given that we are going to have more accurate stats, any plan changes in this case are hopefully a good thing.
--
Alex
Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> Tom Lane wrote:
>> Personally I think Alvaro's position is unduly conservative: to the extent
>> that plans change it'd likely be for the better. But I'm not excited
>> enough to fight hard about it.

> I don't really care enough. We have received some complaints about
> keeping plans stable, but maybe it's okay.

The other side of the coin is that there haven't been so many requests for
changing this; more than just this one, but not a groundswell. So 9.5
only seems like a good compromise unless we get more votes for back-patch.

I reviewed the patch and concluded that it would be better to split
compute_minimal_stats into two functions instead of sprinkling it so
liberally with if's. So I did that and pushed it.

			regards, tom lane
On Thu, Sep 24, 2015 at 12:30 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> > Tom Lane wrote:
> >> Personally I think Alvaro's position is unduly conservative: to the extent
> >> that plans change it'd likely be for the better. But I'm not excited
> >> enough to fight hard about it.
>
> > I don't really care enough. We have received some complaints about
> > keeping plans stable, but maybe it's okay.
>
> The other side of the coin is that there haven't been so many requests for
> changing this; more than just this one, but not a groundswell. So 9.5
> only seems like a good compromise unless we get more votes for back-patch.
>
> I reviewed the patch and concluded that it would be better to split
> compute_minimal_stats into two functions instead of sprinkling it so
> liberally with if's. So I did that and pushed it.
Thanks! I was not really happy about all the checks either, because some of them were rather implicit (e.g. num_mcv being 0 due to track being NULL). Having this as a separate function makes me feel safer.
--
Alex