[GENERAL] How does Postgres estimate the memory needed for sorting/aggregating - Mailing list pgsql-general

From Thomas Kellerer
Subject [GENERAL] How does Postgres estimate the memory needed for sorting/aggregating
Date
Msg-id o6b3j9$evt$1@blaine.gmane.org
Whole thread Raw
Responses Re: [GENERAL] How does Postgres estimate the memory needed for sorting/aggregating  (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
Re: [GENERAL] How does Postgres estimate the memory needed for sorting/aggregating  (John R Pierce <pierce@hogranch.com>)
List pgsql-general
There was a question on dba.stackexchange recently:

    http://dba.stackexchange.com/a/162117/1822

That question (and the answer) deals with the performance difference of a query caused by the _declared_ length of a
VARCHAR column in SQL Server (everything else being equal - especially the actual data length).

For the curious: it does make a (big) difference in performance whether you declare varchar(100) or varchar(2000) in SQL
Server - something that really surprised me.

The difference in performance in SQL Server seems to be caused by SQL Server's optimizer, which uses the _declared_
length of a column to estimate the memory needed for the aggregation (or sorting).

Now, we all know that in Postgres there is no performance difference whatsoever for varchar columns, regardless of the
declared length.

In one of the comments to that answer, the question was asked how Postgres knows how much memory it needs to allocate
to do the aggregation.

I guess this is based on the column statistics stored in pg_stats, but I am not sure.
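For what it's worth, the per-column width statistics gathered by ANALYZE are visible in pg_stats, and the planner's resulting per-row width estimate shows up in EXPLAIN output - a quick illustration (table and column names here are made up):

```sql
-- avg_width is the average *stored* width in bytes, measured by ANALYZE,
-- not the declared varchar length (hypothetical table/column names)
SELECT attname, avg_width
FROM pg_stats
WHERE tablename = 'orders';

-- the "width=..." shown in the plan is derived from those per-column averages
EXPLAIN
SELECT customer_name, count(*)
FROM orders
GROUP BY customer_name;
```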

So here is my question: how does Postgres estimate/know the memory needed for the aggregation? Or does it dynamically
resize the memory if the initial assumption was wrong?

Thomas

pgsql-general by date:

Previous
From: ProPAAS DBA
Date:
Subject: Re: [GENERAL] psql only works with -h (even localhost)
Next
From: Tomas Vondra
Date:
Subject: Re: [GENERAL] How does Postgres estimate the memory needed for sorting/aggregating