Re: VLDB Features - Mailing list pgsql-hackers

From NikhilS
Subject Re: VLDB Features
Date
Msg-id d3c4af540712162335nb2c2310pa045fd5c58a20b84@mail.gmail.com
In response to Re: VLDB Features  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
Hi,

On Dec 15, 2007 1:14 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
NikhilS <nikkhils@gmail.com> writes:
> Any errors which occur before doing the heap_insert should not require
> any recovery according to me.

A sufficient (though far from all-encompassing) rejoinder to that is
"triggers and CHECK constraints can do anything".

> The overhead of having a subtransaction per row is a very valid concern. But
> instead of using a per-insert or a per-batch subtransaction, I am
> thinking that we can start off a subtransaction and continue it till we
> encounter a failure. The moment an error is encountered, since we have the offending (already in heap) tuple around, we can call a simple_heap_delete on the same and commit (instead of aborting) this subtransaction.

What of failures that occur only at (sub)transaction commit, such as
foreign key checks?

What if we identify and define a subset of cases where we could do subtransaction-based COPY? The following could be supported:

* A subset of triggers and CHECK constraints which do not move the tuple around. (Identifying this subset might be an issue though?)
* Primary/unique key indexes

As Hannu mentioned elsewhere in this thread, there should not be very many instances of complex triggers/CHECKs around? And maybe in those instances (and also in the foreign key checks case), the behaviour could default to a subtransaction-per-row or even the existing single-transaction model?
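
To make this concrete, here is a rough sketch of the batch-subtransaction loop I have in mind. This is illustration only, not a patch: the CopyState/NextCopyFrom/form_tuple_from_copy_line names are placeholders for the COPY input path, the heap_insert call is simplified, and a real implementation would also need to restore the memory context and resource owner around the subtransaction calls.

/*
 * Sketch only: insert rows under one open subtransaction; on error,
 * delete the offending (already-inserted) tuple, COMMIT the
 * subtransaction so the good rows are kept, and start a new one.
 */
static void
copy_with_error_skip(Relation rel, CopyState cstate)
{
    BeginInternalSubTransaction(NULL);          /* one subxact for the batch */

    while (NextCopyFrom(cstate))                /* placeholder: fetch next row */
    {
        HeapTuple   tup = form_tuple_from_copy_line(cstate);    /* placeholder */

        PG_TRY();
        {
            heap_insert(rel, tup, GetCurrentCommandId(true), 0, NULL);
            /* ... CHECKs, simple triggers, unique-index insertion ... */
        }
        PG_CATCH();
        {
            FlushErrorState();

            /*
             * Assuming the failure happened after heap_insert put the tuple
             * in place, it is already in the heap: remove it and commit
             * (not abort) the subtransaction, keeping the rows inserted so
             * far under it.
             */
            simple_heap_delete(rel, &tup->t_self);
            ReleaseCurrentSubTransaction();

            BeginInternalSubTransaction(NULL);  /* carry on with the next row */
        }
        PG_END_TRY();
    }

    ReleaseCurrentSubTransaction();             /* commit the final batch */
}

The per-row fallback for the complex-trigger/FK cases would then simply be the same loop with the Begin/Release pair moved inside it.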

Regards,
Nikhils
--
EnterpriseDB               http://www.enterprisedb.com
