On Sat, Nov 4, 2017 at 4:43 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Paul Ramsey <pramsey@cleverelephant.ca> writes:
>>> Whether I get a parallel aggregate seems entirely determined by the number
>>> of rows, not the cost of preparing those rows.
>
>> This is true, as far as I can tell and unfortunate. Feeding tables with
>> 100ks of rows, I get parallel plans, feeding 10ks of rows, never do, no
>> matter how costly the work going on within. That's true of changing costs
>> on the subquery select list, and on the aggregate transfn.
>
> This sounds like it might be the same issue being discussed in
>
> https://www.postgresql.org/message-id/flat/CAMkU=1ycXNipvhWuweUVpKuyu6SpNjF=yHWu4c4US5JgVGxtZQ@mail.gmail.com
Thanks Tom, Amit; yes, this issue (expensive functions in target lists not affecting plan choice) seems to be what I'm hitting in this particular case, and it shows up a lot in PostGIS use cases: a function on a target list like ST_Buffer() or ST_Intersection() will often be a couple of orders of magnitude more expensive than anything in the filters.
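To make it concrete, the shape of query I mean looks like this (the table name and row counts are illustrative):

    -- ~10k input rows: the plan stays serial no matter how costly
    -- the subquery select list or the aggregate transfn is
    EXPLAIN
    SELECT ST_Union(geom)
    FROM (SELECT ST_Buffer(geom, 10.0) AS geom
          FROM small_geoms) AS buffered;  -- small_geoms is hypothetical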
I have rebased the patch being discussed on that thread.
Paul, you might want to check with the recent patch [1] posted on the thread Tom mentioned.
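For what it's worth, one quick way to see whether the patched planner is picking up target-list costs would be to raise the declared cost of the expensive function and compare plans before and after (the COST value and the two-argument signature here are just for illustration):

    -- bump the planner's cost estimate for the expensive function
    ALTER FUNCTION ST_Buffer(geometry, float8) COST 10000;

    -- unpatched, this stays serial on a small table regardless of the
    -- COST setting; with the patch, the higher target-list cost should
    -- be able to tip it into a parallel plan
    EXPLAIN
    SELECT ST_Buffer(geom, 10.0)
    FROM small_geoms;  -- hypothetical ~10k-row table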