Re: [HACKERS] Block level parallel vacuum - Mailing list pgsql-hackers
| From | Amit Kapila |
| --- | --- |
| Subject | Re: [HACKERS] Block level parallel vacuum |
| Date | |
| Msg-id | CAA4eK1J=+rSHAy3ahFk0d-9R+fcP7nZdecsVP1Rw5T80VngCLQ@mail.gmail.com |
| In response to | Re: [HACKERS] Block level parallel vacuum (Masahiko Sawada <masahiko.sawada@2ndquadrant.com>) |
| Responses | Re: [HACKERS] Block level parallel vacuum |
| List | pgsql-hackers |
On Tue, Jan 21, 2020 at 12:11 PM Masahiko Sawada
<masahiko.sawada@2ndquadrant.com> wrote:
>
> On Tue, 21 Jan 2020 at 15:35, Amit Kapila <amit.kapila16@gmail.com> wrote:
> >
> > On Tue, Jan 21, 2020 at 11:30 AM Andres Freund <andres@anarazel.de> wrote:
> > >
> > > Hi,
> > >
> > > On 2020-01-20 09:09:35 +0530, Amit Kapila wrote:
> > > > Pushed, after fixing these two comments.
> > >
> > > When attempting to vacuum a large table I just got:
> > >
> > > postgres=# vacuum FREEZE ;
> > > ERROR:  invalid memory alloc request size 1073741828
> > >
> > > #0  palloc (size=1073741828) at /mnt/tools/src/postgresql/src/backend/utils/mmgr/mcxt.c:959
> > > #1  0x000056452cc45cac in lazy_space_alloc (vacrelstats=0x56452e5ab0e8, vacrelstats=0x56452e5ab0e8, relblocks=24686152)
> > >     at /mnt/tools/src/postgresql/src/backend/access/heap/vacuumlazy.c:2741
> > > #2  lazy_scan_heap (aggressive=true, nindexes=1, Irel=0x56452e5ab1c8, vacrelstats=<optimized out>, params=0x7ffdf8c00290, onerel=<optimized out>)
> > >     at /mnt/tools/src/postgresql/src/backend/access/heap/vacuumlazy.c:786
> > > #3  heap_vacuum_rel (onerel=<optimized out>, params=0x7ffdf8c00290, bstrategy=<optimized out>)
> > >     at /mnt/tools/src/postgresql/src/backend/access/heap/vacuumlazy.c:472
> > > #4  0x000056452cd8b42c in table_relation_vacuum (bstrategy=<optimized out>, params=0x7ffdf8c00290, rel=0x7fbcdff1e248)
> > >     at /mnt/tools/src/postgresql/src/include/access/tableam.h:1450
> > > #5  vacuum_rel (relid=16454, relation=<optimized out>, params=params@entry=0x7ffdf8c00290) at /mnt/tools/src/postgresql/src/backend/commands/vacuum.c:1882
> > >
> > > Looks to me that the calculation moved into compute_max_dead_tuples()
> > > continues to use the allocation ceiling
> > >     maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
> > > but the actual allocation now is
> > >
> > > #define SizeOfLVDeadTuples(cnt) \
> > >         add_size((offsetof(LVDeadTuples, itemptrs)), \
> > >                  mul_size(sizeof(ItemPointerData), cnt))
> > >
> > > i.e. the overhead of offsetof(LVDeadTuples, itemptrs) is not taken into
> > > account.
> > >
> >
> > Right, I think we need to take that into account in both places in
> > compute_max_dead_tuples():
> >
> > maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
> > ..
> > maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
> >
>
> Agreed. Attached patch should fix this issue.

 if (useindex)
 {
-    maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
+    maxtuples = ((vac_work_mem * 1024L) - SizeOfLVDeadTuplesHeader) / sizeof(ItemPointerData);

SizeOfLVDeadTuplesHeader is not defined by the patch. Do you think it
makes sense to add a comment here about the calculation?

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
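For readers following the thread, here is a minimal, standalone C sketch (not the attached patch and not PostgreSQL source) of the overhead-aware calculation under discussion: the header of the dead-tuples struct is subtracted from both the maintenance_work_mem budget and the MaxAllocSize ceiling before dividing by sizeof(ItemPointerData), so that header plus TID array together stay under the allocation limit. The struct layouts, SizeOfDeadTuplesHeader, and compute_max_dead_tuples_sketch() are simplified stand-ins chosen for illustration only.

```c
/*
 * Standalone illustration of the overhead issue discussed above:
 * the cap on the number of dead-tuple TIDs must leave room for the
 * header of the dead-tuples struct, not just the TID array itself.
 * Types here are simplified stand-ins, not PostgreSQL definitions.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct ItemPointerData
{
    uint16_t    bi_hi, bi_lo, offset;   /* 6 bytes, like a heap TID */
} ItemPointerData;

typedef struct LVDeadTuples
{
    int             max_tuples;     /* slots allocated in the array */
    int             num_tuples;     /* entries currently used */
    ItemPointerData itemptrs[];     /* array of dead-tuple TIDs */
} LVDeadTuples;

#define MaxAllocSize            ((size_t) 0x3fffffff)   /* 1 GB - 1, as in PostgreSQL */
#define SizeOfDeadTuplesHeader  offsetof(LVDeadTuples, itemptrs)

/* Total allocation size for 'cnt' TIDs, header included. */
#define SizeOfDeadTuples(cnt) \
    (SizeOfDeadTuplesHeader + sizeof(ItemPointerData) * (size_t) (cnt))

/* Overhead-aware variant of the work_mem -> max-tuples arithmetic. */
static long
compute_max_dead_tuples_sketch(long vac_work_mem_kb)
{
    long        maxtuples;

    /* Subtract the header before dividing the budget into TID slots. */
    maxtuples = ((vac_work_mem_kb * 1024L) - SizeOfDeadTuplesHeader) /
                sizeof(ItemPointerData);

    /* Cap so that header + array still fits under MaxAllocSize. */
    if (maxtuples > (long) ((MaxAllocSize - SizeOfDeadTuplesHeader) /
                            sizeof(ItemPointerData)))
        maxtuples = (MaxAllocSize - SizeOfDeadTuplesHeader) /
                    sizeof(ItemPointerData);

    return maxtuples;
}

int
main(void)
{
    /* Pretend the memory budget is 4 GB expressed in kilobytes. */
    long        maxtuples = compute_max_dead_tuples_sketch(4L * 1024 * 1024);

    printf("max dead tuples: %ld (alloc %zu bytes <= %zu)\n",
           maxtuples, SizeOfDeadTuples(maxtuples), MaxAllocSize);
    return 0;
}
```

With the ceiling computed this way, SizeOfDeadTuples(maxtuples) never exceeds MaxAllocSize, which is the property the quoted backtrace shows being violated when the header overhead is ignored.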