Tom Lane <tgl@sss.pgh.pa.us> writes:
> Ian Lance Taylor <ian@airs.com> writes:
> > Probably true, but on Unix you certainly can't assume that write will
> > set errno if it does not return -1.
>
> Right. The code you propose is isomorphic to what I suggested
> originally. The question is which error condition should we assume
> if errno has not been set; is disk-full sufficiently likely to be the
> cause that we should just say that, or are there plausible alternatives?
Sufficiently likely? Dunno.
I can think of some other possibilities. If the file is on a file
system mounted via NFS or any other remote file system, you might get
any number of errors. If there is a disk error after at least one
disk block has been copied and written, the kernel might return a
short count. If the kernel is severely overloaded, and fails to
allocate a buffer after allocating and writing at least one buffer
successfully, it might return a short count. If the file is very
large, and the write would push it over the maximum file size, you
might get a short count up to the maximum file size. A similar case
might happen if the file is close to the process's resource limit
(RLIMIT_FSIZE). I assume we can rule out cases like a write from a
buffer at the end of user memory such that some data can be copied
into kernel space and then a segmentation violation occurs--on some
systems that could cause a short count if a full block can be written
before the invalid memory is reached.
Obviously a full disk is the most likely case. This is particularly
true if the write is for less than a full disk block. But otherwise I
could believe that at least the disk error case might happen to
somebody someday.
Ian