On Mon, Nov 17, 2008 at 11:10 PM, Craig Ringer
<craig@postnewspapers.com.au> wrote:
> I also think it's a wee bit of a pity that there's no way to tell Pg
> that a job isn't important, so data shouldn't be permitted to push much
> else out of shared_buffers or the OS's cache. The latter can be ensured
> to an extent, at least on Linux, with posix_fadvise(...,
> POSIX_FADV_NOREUSE) or with madvise(...). The former is presumably
> possible with proper work_mem (etc) settings, but I find it's the OS's
> habit of filling the cache with gigabytes of data I won't need again
> that's the real problem. I don't know how this'd work when interacting
> with other backends doing other work with the same tables, though.
Agreed.
It could be that in the OP's case the data set for the big query is so
big that it blows out shared_buffers completely (or most of the way), so
I/O for the other queries has to hit the drives instead of memory, and
that's why they're so slow.
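
For reference, here's roughly what that fadvise hint looks like from the
application side. This is just a minimal sketch; the path and chunk size
are placeholders, and note that Linux has historically treated
POSIX_FADV_NOREUSE as a no-op, so the usual workaround is to issue
POSIX_FADV_DONTNEED for ranges you've already consumed:

/* Read a file sequentially while asking the kernel not to let its pages
 * crowd everything else out of the page cache. */
#define _XOPEN_SOURCE 600

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/path/to/bulk_data";   /* placeholder */
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* Declare up front that we don't intend to reuse this data
     * (a no-op on many Linux kernels, but harmless). */
    posix_fadvise(fd, 0, 0, POSIX_FADV_NOREUSE);

    char buf[1 << 20];          /* 1 MB read chunks */
    off_t done = 0;
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        /* ... process buf here ... */
        done += n;
        /* Tell the kernel it can drop what we've already read. */
        posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
    }

    close(fd);
    return EXIT_SUCCESS;
}

Of course that only helps the OS cache side of things; there's still no
equivalent knob for keeping a low-priority job from churning
shared_buffers itself.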