Re: Piggybacking vacuum I/O - Mailing list pgsql-hackers

From Pavan Deolasee
Subject Re: Piggybacking vacuum I/O
Date
Msg-id 2e78013d0701250317u77c15dfdkcf991a84e30b238d@mail.gmail.com
In response to Re: Piggybacking vacuum I/O  (Heikki Linnakangas <heikki@enterprisedb.com>)
Responses Re: Piggybacking vacuum I/O  (Heikki Linnakangas <heikki@enterprisedb.com>)
List pgsql-hackers

On 1/25/07, Heikki Linnakangas <heikki@enterprisedb.com> wrote:
> Pavan Deolasee wrote:
> >
> > Also is it worth optimizing on the total read() system calls which might
> > not cause physical I/O, but still consume CPU ?
>
> I don't think it's worth it, but now that we're talking about it: What
> I'd like to do to all the slru files is to replace the custom buffer
> management with mmapping the whole file, and letting the OS take care of
> it. We would get rid of some guc variables, the OS would tune the amount
> of memory used for clog/subtrans dynamically, and we would avoid the
> memory copying. And I'd like to do the same for WAL.

Yes, we can do that. One problem, though, is that mmapping wouldn't work
when the CLOG file is extended: some of the backends may not see the
extended portion. But maybe we can start with a sufficiently large
initialized file and mmap the whole file.
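
A minimal standalone sketch of that idea (not actual backend code; the
path and maximum size are assumptions made up for illustration):

    /* Sketch only: pre-extend a file to a fixed maximum size and mmap it
     * once, so backends never need to remap after an "extension".
     * The size below is an arbitrary illustrative upper bound. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define CLOG_MAX_SIZE   (256 * 1024 * 1024)     /* assumed upper bound */

    static char *
    map_clog_file(const char *path)
    {
        int     fd = open(path, O_RDWR | O_CREAT, 0600);
        char   *base;

        if (fd < 0)
        {
            perror("open");
            return NULL;
        }

        /* Extend the file up front so every backend sees the full range. */
        if (ftruncate(fd, CLOG_MAX_SIZE) < 0)
        {
            perror("ftruncate");
            close(fd);
            return NULL;
        }

        base = mmap(NULL, CLOG_MAX_SIZE, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
        close(fd);              /* the mapping stays valid after close() */

        return (base == MAP_FAILED) ? NULL : base;
    }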

Another, simpler solution for VACUUM would be to read the entire CLOG file
into local memory. Most of the transaction status queries can then be
satisfied from this local copy, and the shared CLOG is consulted only when
the status is unknown (TRANSACTION_STATUS_IN_PROGRESS).
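
In rough pseudo-C, the lookup could work like this (the snapshot struct
and function names below are invented for illustration, not the real
clog.c interface, and the byte layout is a simplification of the actual
per-page CLOG indexing):

    #include "postgres.h"
    #include "access/clog.h"
    #include "access/transam.h"

    /* Private CLOG snapshot taken by VACUUM at startup. */
    typedef struct LocalClogSnapshot
    {
        TransactionId   oldest_xid;   /* first xid covered by the snapshot */
        TransactionId   next_xid;     /* first xid beyond the snapshot */
        char           *bits;         /* 2 status bits per transaction */
    } LocalClogSnapshot;

    static XidStatus
    TransactionIdGetStatusLocal(LocalClogSnapshot *snap, TransactionId xid)
    {
        if (TransactionIdFollowsOrEquals(xid, snap->oldest_xid) &&
            TransactionIdPrecedes(xid, snap->next_xid))
        {
            int         byteno = (xid - snap->oldest_xid) / 4;
            int         bshift = ((xid - snap->oldest_xid) % 4) * 2;
            XidStatus   status = (snap->bits[byteno] >> bshift) & 0x03;

            /* A transaction already committed or aborted at snapshot
             * time has a final status; answer from the local copy. */
            if (status != TRANSACTION_STATUS_IN_PROGRESS)
                return status;
        }

        /* Unknown or still in progress: fall back to the shared CLOG. */
        return TransactionIdGetStatus(xid);
    }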

Thanks,
Pavan

--

EnterpriseDB     http://www.enterprisedb.com
