My understanding of how Postgres works may be flawed, but the way I
understand it is that each transaction gets its own snapshot of the
database to work with (MVCC)...
which is why, for example, a vacuum won't be able to reclaim deleted rows
while an older transaction that can still see them is open against the
database.
I think.
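A rough sketch of what I mean, in psql (the table name t and the letter
column are just made up for illustration):

    -- session 1: open a transaction, which holds a snapshot
    BEGIN;
    SELECT count(*) FROM t;   -- still sees the soon-to-be-deleted rows

    -- session 2, meanwhile:
    DELETE FROM t WHERE letter = 'C';
    VACUUM VERBOSE t;         -- runs fine, but reports the dead "C" row
                              -- versions as ones that "cannot be removed
                              -- yet" while session 1's transaction is open

    -- session 1:
    COMMIT;

    -- session 2:
    VACUUM VERBOSE t;         -- now the dead "C" rows can be reclaimed

So the vacuum itself isn't blocked by the open connection; it just can't
free the rows that older transactions might still need to see.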
On Mon, 29 Aug 2005 15:11:14 -0400, Alan Stange <stange@rentec.com> wrote:
> Hello all,
>
> Say, for example, I have a large table T with 26 million rows, one
> million associated with each letter of the alphabet.
> 
> I have a long-running process which does a 'SELECT ID FROM T'. The
> results are being streamed to the client using a fetch size limit. This
> process will take 26 hours to run. It turns out that all the "C" and
> "P" rows are going to be deleted by the time the SELECT gets to them.
>
> Several hours into this process, after the "C" rows have been deleted in
> a separate transaction but we haven't yet gotten to the "P" rows, a
> vacuum is begun on table T.
>
>
> What happens?
>
> Will the 1 million "C" rows be freed and made available for reuse or
> will their visibility with the initial SELECT statement cause the vacuum
> to skip over them?
>
> Thanks!
>
> -- Alan
--
Oren Mazor // Developer, Sysadmin, Explorer
GPG Key: http://www.grepthemonkey.org/secure
"Ut sementem feceris, ita metes"