Thread: Re: problem with large maintenance_work_mem settings and

Re: problem with large maintenance_work_mem settings and

From: "Zeugswetter Andreas DCP SD"
> > I'll look into it, but I was already wondering if we shouldn't bound
> > the number of tapes somehow.  It's a bit hard to believe that 28000
> > tapes is a sane setting.
>
> Well, since they are not actually tapes, why not?

I wonder what the OS does when we repeatedly open and close those files
because we are short on file descriptors? Will it evict cached pages of
a file that we have closed *more* aggressively?

Maybe we should limit the number of tapes to how many files we could
actually hold open in parallel? Or keep more than one "tape" in one file
and remember a start offset into the file per tape.

Andreas


Re: problem with large maintenance_work_mem settings and

From: Tom Lane
"Zeugswetter Andreas DCP SD" <ZeugswetterA@spardat.at> writes:
>>> I'll look into it, but I was already wondering if we shouldn't bound
>>> the number of tapes somehow.  It's a bit hard to believe that 28000 
>>> tapes is a sane setting.
>> 
>> Well, since they are not actually tapes, why not?

> I wonder what the OS does when we repeatedly open and close those files
> because we are short on file descriptors?

At the moment, nothing, because all the "tapes" are just I/O buffers on
the same OS-level file (or more accurately, one file per gigabyte of
data).

If we get rid of logtape.c as Luke wants to do, then we might have some
issues here.
        regards, tom lane