Thread: Spec discussion: Generalized Data Queue / Modification Trigger

Spec discussion: Generalized Data Queue / Modification Trigger

From: Josh Berkus
Per -hackers, we need to have two things for each of our desired features:

(1) a clear specification on how the feature should work, and how it
should be implemented.

(2) specific people to work on the feature.

As such, I'm starting with a feature which isn't the highest priority as
far as votes are concerned, but is *my* highest priority as far as stuff
I'd personally like to use.

Now, I have my idea on how I'd like the Data Queue / Modification
Triggers to work, but I'm not the one who proposed those two closely
related features.  Can the original proposers speak up on what kind of
spec you had in mind?

--Josh Berkus

Re: Spec discussion: Generalized Data Queue / Modification Trigger

From: Josh Berkus
> Now, I have my idea on how I'd like the Data Queue / Modification
> Triggers to work, but I'm not the one who proposed those two closely
> related features.  Can the original proposers speak up on what kind of
> spec you had in mind?

Since nobody has spoken up about this, I'll spec it myself ...
particularly since I care strongly about it for heterogeneous replication
(that is, replication to non-PostgreSQL systems).

As far as I can see, three things would be of general utility:

(1) The ability to send asynchronous (or synchronous?) notifications, on
a per-row basis, whenever data is modified *only after commit*.  This
has been generally described as "on-commit triggers", but could actually
take a variety of forms.

(2) A generic yet compact, portable serialization format for a row of
PostgreSQL data, such as protobuf or something internal.

(3) A method of marking DDL changes in the data modification stream.

Of the three above, #2 seems the most optional, and possibly completely
counter-productive.  Even if we decided that a general portable data
format made sense, we could retrofit it later.  So I'm dropping it from
the spec.

The question is, can we satisfy (1) with the new LISTEN/NOTIFY, or is it
inadequate?  I'll also point out that currently DEFERRABLE
CONSTRAINT TRIGGERS can be perverted to work like an after-commit trigger.
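For reference, the DEFERRABLE trick looks something like this (table,
column, and function names are all hypothetical; a deferred constraint
trigger's function runs at COMMIT time rather than at statement time,
which is "like" an after-commit trigger but still inside the transaction):

```sql
-- Hypothetical names throughout; a sketch, not a recommendation.
CREATE OR REPLACE FUNCTION notify_at_commit() RETURNS trigger AS $$
BEGIN
    -- pg_notify() with a payload assumes the new (9.0) LISTEN/NOTIFY
    PERFORM pg_notify('row_changed', TG_TABLE_NAME || ':' || NEW.id);
    RETURN NULL;  -- return value is ignored for deferred AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER commit_notify
    AFTER INSERT OR UPDATE ON some_table
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE PROCEDURE notify_at_commit();
```

Note that the function still runs just *before* the commit completes, so
if the commit subsequently fails the NOTIFY is rolled back with it --
which, for NOTIFY at least, is the behavior you want.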

For (3), it kind of depends on another item on the list, the DDL triggers.

Discussion?

--Josh Berkus


Re: Spec discussion: Generalized Data Queue / Modification Trigger

From: Greg Sabino Mullane
On Wed, Mar 03, 2010 at 09:42:22AM -0800, Josh Berkus wrote:
> (1) The ability to send asynchronous (or synchronous?) notifications, on
> a per-row basis, whenever data is modified *only after commit*.  This
> has been generally described as "on-commit triggers", but could actually
> take a variety of forms.

I'm not sure I like the idea of this. It could be dangerous, as
listen/notify is not treated as a "reliable" process. What's wrong with
the current method, namely having a row trigger update an internal
table, and then a statement-level trigger firing off a notify?
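Sketched out (hypothetical names; roughly the shape of what Slony-I and
PgQ do, not their actual code):

```sql
-- Hypothetical names; the queue-table pattern described above.
CREATE TABLE event_queue (
    ev_id    bigserial PRIMARY KEY,
    ev_table text NOT NULL,
    ev_op    text NOT NULL,   -- 'INSERT' / 'UPDATE' / 'DELETE'
    ev_row   text             -- serialized row image
);

-- Row-level trigger: record each change in the queue table.
CREATE OR REPLACE FUNCTION capture_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO event_queue (ev_table, ev_op, ev_row)
        VALUES (TG_TABLE_NAME, TG_OP, OLD::text);
        RETURN OLD;
    ELSE
        INSERT INTO event_queue (ev_table, ev_op, ev_row)
        VALUES (TG_TABLE_NAME, TG_OP, NEW::text);
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

-- Statement-level trigger: wake up listeners once per statement.
CREATE OR REPLACE FUNCTION announce_change() RETURNS trigger AS $$
BEGIN
    NOTIFY queue_changed;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER aa_capture AFTER INSERT OR UPDATE OR DELETE
    ON some_table FOR EACH ROW EXECUTE PROCEDURE capture_change();
CREATE TRIGGER zz_announce AFTER INSERT OR UPDATE OR DELETE
    ON some_table FOR EACH STATEMENT EXECUTE PROCEDURE announce_change();
```

A consumer then LISTENs on queue_changed and, on wakeup, reads and
deletes rows from event_queue in ev_id order.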

> (2) A generic yet compact, portable serialization format for a row of
> PostgreSQL data, such as protobuf or something internal.

Jan has been tinkering with this for some time (granted, in a slightly
different context, but basically something faster and lower level
than COPY, that could perhaps be fed something external).

> (3) A method of marking DDL changes in the data modification stream.

Hmm..can you expand on what you have in mind here? Something more than
just treating the DDL as another item in the (txn ordered) queue?

--
Greg Sabino Mullane greg@endpoint.com
End Point Corporation
PGP Key: 0x14964AC8


Re: Spec discussion: Generalized Data Queue / Modification Trigger

From: Josh Berkus
Greg,

>> (1) The ability to send asynchronous (or synchronous?) notifications, on
>> a per-row basis, whenever data is modified *only after commit*.  This
>> has been generally described as "on-commit triggers", but could actually
>> take a variety of forms.
>
> I'm not sure I like the idea of this. Could be potentially dangerous, as
> listen/notify is not treated as a "reliable" process. What's wrong with
> the current method, namely having a row trigger update an internal
> table, and then a statement level trigger firing off a notify?

Well, the main problem with that is that it doubles the number of writes
you have to do ... or more.  So it's a major efficiency issue.

This isn't as much of a concern for a system like Slony or Londiste,
where the replication queue is a table in the database.  But if you
were, say, replicating through Apache ActiveMQ?  Or replicating cached
data to Redis?  Then the whole queue-table, NOTIFY, poll structure is
needless overhead.

>> (3) A method of marking DDL changes in the data modification stream.
>
> Hmm..can you expand on what you have in mind here? Something more than
> just treating the DDL as another item in the (txn ordered) queue?

Yeah, that would be one way to handle it.  Alternately, you could have
the ability to mark rows with a DDL "version".

--Josh Berkus


Re: Spec discussion: Generalized Data Queue / Modification Trigger

From: Hannu Krosing
On Wed, 2010-03-03 at 11:52 -0800, Josh Berkus wrote:
> Greg,
>
> >> (1) The ability to send asynchronous (or synchronous?) notifications, on
> >> a per-row basis, whenever data is modified *only after commit*.  This
> >> has been generally described as "on-commit triggers", but could actually
> >> take a variety of forms.
> >
> > I'm not sure I like the idea of this. Could be potentially dangerous, as
> > listen/notify is not treated as a "reliable" process. What's wrong with
> > the current method, namely having a row trigger update an internal
> > table, and then a statement level trigger firing off a notify?
>
> Well, the main problem with that is that it doubles the number of writes
> you have to do ... or more.  So it's a major efficiency issue.
>
> This isn't as much of a concern for a system like Slony or Londiste
> where the replication queue is a table in the database.

Yes. For Londiste, in addition to the WAL writes (which write bigger
chunks of data but need the same number of seeks and syncs), only
deferred writes to the heap and a single index would be added, and even
those may never actually be written to disk if replication is fast
enough and the event tables are rotated before the background writer and
checkpoints get around to writing them out.

> But if you
> were, say, replicating through Apache ActiveMQ?  Or replicating cached
> data to Redis?  Then the whole queue-table, NOTIFY, poll structure is
> needless overhead.

It may seem easy to replace a database table with "something else" for
collecting the changes that have happened during the transaction, but
you have to answer the following questions:

1) do I need persistence, what about 2PC ?

2) does the "something else" work well in all situations where an event
table would (say, a load of 500GB of data in one transaction)?

3) what would I gain in return for all the work needed to implement the
"something else" ?

> >> (3) A method of marking DDL changes in the data modification stream.

Yes, DDL triggers or somesuch would be highly desirable.

> > Hmm..can you expand on what you have in mind here? Something more than
> > just treating the DDL as another item in the (txn ordered) queue?
>
> Yeah, that would be one way to handle it.  Alternately, you could have
> the ability to mark rows with a DDL "version".

But the actual DDL would still need to be transferred, no ?

--
Hannu Krosing   http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability
   Services, Consulting and Training



Re: Spec discussion: Generalized Data Queue / Modification Trigger

From: Josh Berkus
> It may seem easy to replace a database table with "something else" for
> collecting the changes that have happened during the transaction, but
> you have to answer the following questions:
>
> 1) do I need persistence, what about 2PC ?
>
> 2) does the "something else" work well in all situations where an event
> table would (say, a load of 500GB of data in one transaction)?

Those are good questions, and a generic system would need to work for
all three of those requirements.

> 3) what would I gain in return for all the work needed to implement the
> "something else" ?

Speed.  In my test case, which was replicating view snapshots between
PostgreSQL and Redis, the difference between using an event table and
perverting constraint triggers into an after-commit trigger writing
directly to Redis was around 400%, not counting vacuum overhead.

>>>> (3) A method of marking DDL changes in the data modification stream.
>
> Yes, DDL triggers or somesuch would be highly desirable.
>
>>> Hmm..can you expand on what you have in mind here? Something more than
>>> just treating the DDL as another item in the (txn ordered) queue?
>> Yeah, that would be one way to handle it.  Alternately, you could have
>> the ability to mark rows with a DDL "version".
>
> But the actual DDL would still need to be transferred, no ?

Yes.  It may be that having a DDL change simply inserted into the
replication stream is the way to go.  Alternatively, DDL versioning
makes a certain amount of sense, except that it's pretty hard to make
generic and would require additional catalog tables.

--Josh Berkus


Re: Spec discussion: Generalized Data Queue / Modification Trigger

From: Hannu Krosing
On Wed, 2010-03-03 at 13:43 -0800, Josh Berkus wrote:
> > It may seem easy to replace a database table with "something else" for
> > collecting the changes that have happened during the transaction, but
> > you have to answer the following questions:
> >
> > 1) do I need persistence, what about 2PC ?
> >
> > 2) does the "something else" work well in all situations where an event
> > table would (say, a load of 500GB of data in one transaction)?
>
> Those are good questions, and a generic system would need to work for
> all three of those requirements.
>
> > 3) what would I gain in return for all the work needed to implement the
> > "something else" ?
>
> Speed.  In my test case, which was replicating view snapshots between
> PostgreSQL and Redis, the difference between using an event table and
> perverting constraint triggers into an after-commit trigger writing
> directly to Redis was around 400%, not counting vacuum overhead.

What do you mean by "speed difference"?

Lag?

Or the DML rate the system could keep up with?

Or something else?

> >>>> (3) A method of marking DDL changes in the data modification stream.
> >
> > Yes, DDL triggers or somesuch would be highly desirable.
> >
> >>> Hmm..can you expand on what you have in mind here? Something more than
> >>> just treating the DDL as another item in the (txn ordered) queue?
> >> Yeah, that would be one way to handle it.  Alternately, you could have
> >> the ability to mark rows with a DDL "version".
> >
> > But the actual DDL would still need to be transferred, no ?
>
> Yes.  It may be that having a ddl change simply inserted into the
> replication stream is the way to go.  Alternatively, DDL versioning
> makes a certain amount of sense except that it's pretty hard to make
> generic, and would require additional catalog tables.

But what would DDL versioning _gain_? I assume you just have to stop
and wait for the new version of the DDL to arrive once your DML stream
switches to the new DDL version?

--
Hannu Krosing   http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability
   Services, Consulting and Training



Re: Spec discussion: Generalized Data Queue / Modification Trigger

From: Takahiro Itagaki
Josh Berkus <josh@agliodbs.com> wrote:

> (1) The ability to send asynchronous (or synchronous?) notifications, on
> a per-row basis, whenever data is modified *only after commit*.  This
> has been generally described as "on-commit triggers", but could actually
> take a variety of forms.

> The question is, can we satisfy (1) with the new LISTEN/NOTIFY, or is it
> inadequate?  I'll also point out that currently DEFERRABLE
> CONSTRAINT TRIGGERS can be perverted to work like an after-commit trigger.

I think modification triggers would be used not only by clustering
middleware but also by materialized views and VACUUM FULL CONCURRENTLY,
where we could replay the modifications using the queued items. So we
could separate the trigger work into (1.1) storing modifications in
persistent storage and (1.2) sending the items to another server. IMHO,
the "on-commit" part is not so important.

We can implement 1.1 with existing triggers, as Slony-I and PgQ do, but
the trigger-based approach has the problem that the capture trigger
needs to be called last in the trigger chain. PostgreSQL fires triggers
in alphabetical order, and we cannot forbid users from adding triggers
with later-sorting names (e.g. 'zzz_trigger').
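The ordering hazard is easy to demonstrate (trigger and function names
here are hypothetical): triggers of the same kind on the same table fire
in name order, so a capture trigger named to sort last can still be
followed by a user trigger that rewrites the row after it was recorded.

```sql
-- Hypothetical names; BEFORE ROW triggers fire in alphabetical order,
-- so a later-sorting user trigger can modify NEW *after* the capture
-- trigger has already recorded the row.
CREATE TRIGGER zzy_capture BEFORE UPDATE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE capture_change();  -- meant to run last
CREATE TRIGGER zzz_rewrite BEFORE UPDATE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE rewrite_row();     -- but this sorts later
-- Firing order on UPDATE: zzy_capture, then zzz_rewrite.
```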

We can develop 1.2 on top of 1.1. If we want to improve the performance
of the queue storage, it could be implemented with global temporary
tables, for which we don't write any WAL.

Regards,
---
Takahiro Itagaki
NTT Open Source Software Center



Re: Spec discussion: Generalized Data Queue / Modification Trigger

From: Josh Berkus
All,

I wrote this up on http://wiki.postgresql.org/wiki/ModificationTriggerGDQ

Please expand.

--Josh Berkus