Re: [ADMIN] ERROR: could not read block - Mailing list pgsql-hackers

From Qingqing Zhou
Subject Re: [ADMIN] ERROR: could not read block
Msg-id dltkus$1u7l$1@news.hub.org
In response to Re: [ADMIN] ERROR: could not read block  ("Magnus Hagander" <mha@sollentuna.net>)
List pgsql-hackers
"Magnus Hagander" <mha@sollentuna.net> wrote:
>
> The way I read it, a delay should help. It's basically running out of
> kernel buffers, and we just delay, somebody else (another process, or an
> IRQ handler, or whatever) should get finished with their I/O, free up
> the buffer, and let us have it. Looking around a bit I see several
> references that you should retry on it, but nothing in the API docs.
> I do think it's probably a good idea to do a short delay before retrying
> - at least to yield the CPU for one slice. That would greatly increase
> the probability of someone else finishing their I/O...
>

Reading more in the second thread, I found this:

" NTBackupread and NTBackupwrite both use buffered I/O. This means that 
Windows NT caches the I/O that is performed against the stream. It is also 
the only API that will back up the metadata of a file. This cache is pulled 
from limited resources: namely, pool and nonpaged pool. Because of this, 
extremely large numbers of files or files that are very large may cause the 
pool resources to run low. "

So does this imply that using unbuffered I/O on Windows would eliminate the 
problem? If so, just adding FILE_FLAG_NO_BUFFERING when we open a data file 
would solve it -- but that change is in fact very invasive, because it would 
make the server's I/O optimization strategy totally different from *nix.

Regards,
Qingqing



