On 08/05/2016 07:18 PM, Andrew Sullivan wrote:
> On Fri, Aug 05, 2016 at 06:02:08PM +0300, Grigory Smolkin wrote:
>> But it's a temporary table, so it's equal to saying 'I don't care about
>> this data', and I can get 'out of disk space' regardless of using
>> temporary tables.
>> What are we winning here?
> Surely, that the transaction operates in a predictable way? A temp
> table doesn't say, "I don't care about this data," it says, "I don't
> care about this data over the long haul." I've had lots of data go
> through temp tables that I really really wanted to get into some other
> place later, and it'd suck if the transaction failed halfway through
> because it turns out there's nowhere to put the data I've just staged.
>
> A
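(For concreteness, I understand the staging pattern you mean to be something like the following sketch; the table and column names here are made up for illustration:)

```sql
-- Stage incoming rows in a temp table, clean them up, then move them
-- into a permanent table.  If the final INSERT fails (e.g. no disk
-- space for the target), the whole transaction rolls back predictably.
BEGIN;

CREATE TEMP TABLE staging_events (LIKE events INCLUDING DEFAULTS)
    ON COMMIT DROP;

COPY staging_events FROM '/tmp/events.csv' WITH (FORMAT csv);

INSERT INTO events
SELECT * FROM staging_events
WHERE event_time IS NOT NULL;   -- example cleanup filter

COMMIT;
```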
But in that case you lose your data in case of a power outage, deadlock, or network problem.
As it seems to me, you can either 'care about your data' and use regular tables, protected by WAL, or not care and use temp tables.
What I am trying to understand is: are temp tables really worth that many disk operations? First we create an empty file, then we reserve space for it, and then we write data in case of temp_buffers overflow. If there are many temp tables, this starts to eat a lot of I/O.
Wouldn't it be more effective to create the file for a temp table on demand?
I think that for most temp table operations, temp_buffers memory will be enough.
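For reference, temp_buffers is a per-session setting, and it can only be changed before the first access to a temp table in the session; something like the following sketch (table name is illustrative):

```sql
-- Must run before any temp table is touched in this session;
-- afterwards the setting is locked in for the session's lifetime.
SET temp_buffers = '64MB';

CREATE TEMP TABLE scratch (id int, payload text);
-- While scratch fits in temp_buffers, its pages live in local
-- buffers; writes spill to the on-disk file only on overflow.
```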
--
Grigory Smolkin
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company