Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem - Mailing list pgsql-hackers

From: Claudio Freire
Subject: Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem
Date:
Msg-id: CAGTBQpZUbCxBu=Ckow3ydFFPqrRFQ5NxGUQExgK7d=9N24bmYg@mail.gmail.com
In response to: Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem (Alvaro Herrera <alvherre@alvh.no-ip.org>)
Responses: Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem (Claudio Freire <klaussfreire@gmail.com>)
List: pgsql-hackers
On Wed, Feb 7, 2018 at 11:29 PM, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
> Claudio Freire wrote:
>> On Wed, Feb 7, 2018 at 8:52 PM, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
>> >> Waiting as you say would be akin to what the patch does by putting
>> >> vacuum on its own parallel group.
>> >
>> > I don't think it's the same.  We don't need to wait until all the
>> > concurrent tests are done -- we only need to wait until the transactions
>> > that were current when the delete finished are done, which is very
>> > different since each test runs tons of small transactions rather than
>> > one single big transaction.
>>
>> Um... maybe "lock pg_class" ?
>
> I was thinking of first doing
>   SELECT array_agg(DISTINCT virtualtransaction) vxids
>     FROM pg_locks \gset
>
> and then in a DO block loop until
>
>    SELECT DISTINCT virtualtransaction
>      FROM pg_locks
> INTERSECT
>    SELECT (unnest(:'vxids'::text[]));
>
> returns empty; something along those lines.

Isn't it the same though?

I can't think of a case where a transaction wouldn't be holding at least
an access share lock on pg_class.
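
For reference, something along the lines Alvaro describes above could
look like the sketch below. It is untested; the pending_vxids temp table
name and the 0.1 s poll interval are illustrative only, and a temp table
stands in for the :'vxids' psql variable, since psql doesn't interpolate
variables inside the dollar-quoted DO body:

    -- Capture the virtual transaction ids holding any lock right now,
    -- skipping this backend's own entries so we don't wait on ourselves.
    CREATE TEMP TABLE pending_vxids AS
    SELECT DISTINCT virtualtransaction AS vxid
      FROM pg_locks
     WHERE pid IS DISTINCT FROM pg_backend_pid();

    -- Poll until none of those transactions show up in pg_locks anymore.
    DO $$
    BEGIN
        WHILE EXISTS (SELECT 1
                        FROM pg_locks l
                        JOIN pending_vxids p ON p.vxid = l.virtualtransaction)
        LOOP
            PERFORM pg_sleep(0.1);
        END LOOP;
    END
    $$;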

