Thread: ON DELETE SET NULL clauses error out when more than two columns reference one table

Hello

A question on www.abclinuxu.cz points to a bug in the PostgreSQL RI
implementation. Detected on 8.0.x and verified on 8.3.

Regards
Pavel Stehule

CREATE TABLE users (
 id integer NOT NULL,
 name VARCHAR NOT NULL,
 PRIMARY KEY (id)
);

INSERT INTO users VALUES (1, 'Jozko');
INSERT INTO users VALUES (2, 'Ferko');
INSERT INTO users VALUES (3, 'Samko');

CREATE TABLE tasks (
 id integer NOT NULL,
 owner INT REFERENCES users (id) ON UPDATE CASCADE ON DELETE SET NULL,
 worker INT REFERENCES users (id) ON UPDATE CASCADE ON DELETE SET NULL,
 checked_by INT REFERENCES users (id) ON UPDATE CASCADE ON DELETE SET NULL,
 PRIMARY KEY (id)
);
INSERT INTO tasks VALUES (1,1,NULL,NULL);
INSERT INTO tasks VALUES (2,2,2,NULL);
INSERT INTO tasks VALUES (3,3,3,3);

DELETE FROM users WHERE id = 1; -- works
DELETE FROM users WHERE id = 2; -- works
DELETE FROM users WHERE id = 3; -- fails -- why?

ERROR:  insert or update on table "tasks" violates foreign key
constraint "tasks_checked_by_fkey"
DETAIL:  Key (checked_by)=(3) is not present in table "users".
CONTEXT:  SQL statement "UPDATE ONLY "public"."tasks" SET "worker" =
NULL WHERE $1 OPERATOR(pg_catalog.=) "worker""
Pavel Stehule wrote:
> A question on www.abclinuxu.cz points to a bug in the PostgreSQL RI
> implementation. Detected on 8.0.x and verified on 8.3.

What seems to happen under the hood is:

1. The row in users is deleted
2. The setnull trigger on owner is fired, which executes "UPDATE ONLY
tasks SET owner = NULL WHERE owner = 3"
3. That update fires the update-triggers on tasks-table, to check that
the referenced row exists in users. Because owner is now null, it's not
checked. Worker and checked_by are not checked, because
AfterTriggerSaveEvent sees that those columns were not modified.
4. The setnull trigger on worker is fired, which executes "UPDATE ONLY
tasks SET worker = NULL WHERE worker = 3".
5. That update again fires the update-triggers on tasks, to check that
the referenced row exists in users. Owner and worker are now null, so
they're not checked. However, checked_by is not null, so the trigger
runs the check that the referenced row in the users table exists
("SELECT 1 FROM users WHERE id=3 FOR SHARE"), and that check fails.
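In other words, if I'm reading the sequence right, the statements
effectively executed are roughly the following (a reconstruction for
illustration, not an exact trace):

```sql
DELETE FROM users WHERE id = 3;
-- setnull trigger for the owner FK:
UPDATE ONLY tasks SET owner = NULL WHERE owner = 3;
--   owner is now NULL, worker/checked_by unmodified => no FK checks fired
-- setnull trigger for the worker FK:
UPDATE ONLY tasks SET worker = NULL WHERE worker = 3;
--   checked_by is still 3 and the old tuple was touched in this
--   transaction, so the FK check runs and fails:
SELECT 1 FROM users WHERE id = 3 FOR SHARE;   -- row already deleted
```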

The ON UPDATE trigger is not fired in step 2, because of the
optimization in AfterTriggerSaveEvent to skip UPDATE triggers if the FK
column was not changed. However, we don't do the optimization if the old
tuple was created in the same transaction:

case RI_TRIGGER_FK:
    /*
     * Update on FK table
     *
     * There is one exception when updating FK tables: if the
     * updated row was inserted by our own transaction and the
     * FK is deferred, we still need to fire the trigger. This
     * is because our UPDATE will invalidate the INSERT so the
     * end-of-transaction INSERT RI trigger will not do
     * anything, so we have to do the check for the UPDATE
     * anyway.
     */
    if (HeapTupleHeaderGetXmin(oldtup->t_data) !=
        GetCurrentTransactionId() &&
        RI_FKey_keyequal_upd_fk(trigger, rel, oldtup, newtup))
    {
        continue;
    }
    break;

which is why we do fire the trigger in step 5.

ISTM the real problem is that the RI triggers are fired in the wrong order.
The above optimization saves the day in simple scenarios, but it seems
to be an unintended side-effect.

In fact, you can trigger the problem with a child table with just two ON
DELETE SET NULL columns, if you do a dummy update of the child row
before the delete of the parent, in the same transaction:

CREATE TABLE users (
 id integer NOT NULL,
 name VARCHAR NOT NULL,
 PRIMARY KEY (id)
);

INSERT INTO users VALUES (3, 'Samko');

CREATE TABLE tasks (
 id integer NOT NULL,
 a INT REFERENCES users (id) ON DELETE SET NULL,
 b INT REFERENCES users (id) ON DELETE SET NULL
);
INSERT INTO tasks VALUES (3,3,3);

BEGIN;
UPDATE tasks set id=id WHERE id=3;
DELETE FROM users WHERE id = 3; -- fails -- why?
COMMIT;

I'm not sure what to do about this. We could change the order the
triggers are fired to breadth-first. If all the setnull triggers were
executed first, there would be no problem. But that seems like a pretty
big change, and I'm afraid it might have other unintended consequences.
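For the record, until this is sorted out, an affected application can
probably work around it by nulling all the referencing columns itself in
a single statement before the delete, so the setnull triggers have
nothing left to update (untested beyond the cases above):

```sql
-- Possible workaround: one UPDATE covers every FK column at once,
-- so no intermediate state triggers a spurious FK check.
BEGIN;
UPDATE tasks SET owner = NULL, worker = NULL, checked_by = NULL
 WHERE 3 IN (owner, worker, checked_by);
DELETE FROM users WHERE id = 3;
COMMIT;
```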

Ideas?

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
"Heikki Linnakangas" <heikki@enterprisedb.com> writes:

> I'm not sure what to do about this. We could change the order the
> triggers are fired to breadth-first. If all the setnull triggers were
> executed first, there would be no problem. But that seems like a pretty
> big change, and I'm afraid it might have other unintended consequences.

We could also check the queued triggers to see if there really is a trigger
which will be invalidated by the second update and therefore needs to be
rechecked (or even just change the tid of the existing queued check). I don't
know what it would take to make that efficient though.

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
Heikki Linnakangas <heikki@enterprisedb.com> writes:
> I'm not sure what to do about this. We could change the order the
> triggers are fired to breadth-first. If all the setnull triggers were
> executed first, there would be no problem. But that seems like a pretty
> big change, and I'm afraid it might have other unintended consequences.

I think it's not so much that they should be "breadth first" as that the
updates generated by the triggers shouldn't count as their own
sub-statements.  The trigger events generated by those updates need to
go at the end of the outer statement's trigger queue.  We'll need to
change the API of SPI_execute_snapshot for this, but since that's only
for the use of the RI triggers anyway, it doesn't seem like a problem.

I also notice that only one of the
afterTriggerMarkEvents/afterTriggerInvokeEvents pairs in trigger.c
is coded as a "while" ... they probably all must be if we expect that RI
triggers will generate events at the same trigger level.

            regards, tom lane
Tom Lane wrote:
> Heikki Linnakangas <heikki@enterprisedb.com> writes:
>> I'm not sure what to do about this. We could change the order the
>> triggers are fired to breadth-first. If all the setnull triggers were
>> executed first, there would be no problem. But that seems like a pretty
>> big change, and I'm afraid it might have other unintended consequences.
>
> I think it's not so much that they should be "breadth first" as that the
> updates generated by the triggers shouldn't count as their own
> sub-statements.  The trigger events generated by those updates need to
> go at the end of the outer statement's trigger queue.  We'll need to
> change the API of SPI_execute_snapshot for this, but since that's only
> for the use of the RI triggers anyway, it doesn't seem like a problem.
>
> I also notice that only one of the
> afterTriggerMarkEvents/afterTriggerInvokeEvents pairs in trigger.c
> is coded as a "while" ... they probably all must be if we expect that RI
> triggers will generate events at the same trigger level.

I can take a stab at writing a patch.

Does this need to be backported? It's a bug, but I feel uncomfortable
backporting it. I'm afraid it'll break some other corner cases, and it
doesn't seem to be a huge problem in practice, since no one has run into
it until now.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
I wrote:
> Heikki Linnakangas <heikki@enterprisedb.com> writes:
>> I'm not sure what to do about this. We could change the order the
>> triggers are fired to breadth-first. If all the setnull triggers were
>> executed first, there would be no problem. But that seems like a pretty
>> big change, and I'm afraid it might have other unintended consequences.

> I think it's not so much that they should be "breadth first" as that the
> updates generated by the triggers shouldn't count as their own
> sub-statements.  The trigger events generated by those updates need to
> go at the end of the outer statement's trigger queue.

Actually, Heikki's description is isomorphic to mine ...

I coded this up (patch attached, sans regression test additions) and
it seems to work: Pavel's and Heikki's test cases do what is expected.
The regression tests pass modulo this diff:

*** ./expected/foreign_key.out    Tue Jul 17 13:45:28 2007
--- ./results/foreign_key.out    Tue Aug 14 14:39:38 2007
***************
*** 646,652 ****
  UPDATE PKTABLE set ptest2=5 where ptest2=2;
  ERROR:  insert or update on table "fktable" violates foreign key constraint "constrname3"
  DETAIL:  Key (ftest1,ftest2,ftest3)=(1,-1,3) is not present in table "pktable".
- CONTEXT:  SQL statement "UPDATE ONLY "public"."fktable" SET "ftest2" = DEFAULT WHERE $1 OPERATOR(pg_catalog.=) "ftest1" AND $2 OPERATOR(pg_catalog.=) "ftest2" AND $3 OPERATOR(pg_catalog.=) "ftest3""
  -- Try to update something that will set default
  UPDATE PKTABLE set ptest1=0, ptest2=5, ptest3=10 where ptest2=2;
  UPDATE PKTABLE set ptest2=10 where ptest2=4;
--- 646,651 ----

which is happening because the error is no longer detected while we are
inside the first RI trigger's UPDATE command, but only while handling
triggers queued for the outer query.  This is a little bit annoying IMHO
because the connection of the error message to the original query isn't
very obvious, and seeing the RI update query might help make it a bit
clearer.  OTOH the RI update query is ugly enough that I'm not sure the
average user would be helped by the CONTEXT line.  In any case it seems
difficult to avoid this change.

Another problem is that, because the indirectly-fired RI triggers are
ultimately run in a context where their target table is not the direct
target of the current query, trigger.c doesn't have any rangetable entry
to connect them to (this is why the patch has to touch trigger.c, as it
formerly considered that an error case).  This has a performance cost
(extra heap_opens) that is probably usually negligible, but maybe not
always.  Also, as the patch stands EXPLAIN ANALYZE would fail to report
the runtime of such triggers separately --- they'd be included in the
bottom-line total runtime but not listed by themselves.  (In current
code they aren't listed separately either ... but they'd be charged to
the parent trigger that caused them to be fired, which is better than
nothing.)

I am tempted to try to fix these last two problems by having
afterTriggerInvokeEvents() dynamically add tables to the executor's
range table when it's told to fire triggers for tables that aren't
already listed there.  That's getting a bit hairy for a back-patchable
fix though.  I'm thinking of trying that in HEAD but applying this
patch as-is to 8.0 through 8.2.  (Note that 7.x doesn't exhibit the
bug, mainly because it never tried to fire any triggers earlier than
end-of-interactive-command.)

Comments?

            regards, tom lane

Index: src/backend/commands/trigger.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/commands/trigger.c,v
retrieving revision 1.216
diff -c -r1.216 trigger.c
*** src/backend/commands/trigger.c    17 Jul 2007 17:45:28 -0000    1.216
--- src/backend/commands/trigger.c    14 Aug 2007 18:48:11 -0000
***************
*** 2332,2337 ****
--- 2332,2338 ----
      AfterTriggerEvent event,
                  prev_event;
      MemoryContext per_tuple_context;
+     bool        locally_opened = false;
      Relation    rel = NULL;
      TriggerDesc *trigdesc = NULL;
      FmgrInfo   *finfo = NULL;
***************
*** 2364,2369 ****
--- 2365,2383 ----
               */
              if (rel == NULL || rel->rd_id != event->ate_relid)
              {
+                 if (locally_opened)
+                 {
+                     /* close prior rel if any */
+                     if (rel)
+                         heap_close(rel, NoLock);
+                     if (trigdesc)
+                         FreeTriggerDesc(trigdesc);
+                     if (finfo)
+                         pfree(finfo);
+                     Assert(instr == NULL);        /* never used in this case */
+                 }
+                 locally_opened = true;
+
                  if (estate)
                  {
                      /* Find target relation among estate's result rels */
***************
*** 2375,2402 ****
                      while (nr > 0)
                      {
                          if (rInfo->ri_RelationDesc->rd_id == event->ate_relid)
                              break;
                          rInfo++;
                          nr--;
                      }
-                     if (nr <= 0)    /* should not happen */
-                         elog(ERROR, "could not find relation %u among query result relations",
-                              event->ate_relid);
-                     rel = rInfo->ri_RelationDesc;
-                     trigdesc = rInfo->ri_TrigDesc;
-                     finfo = rInfo->ri_TrigFunctions;
-                     instr = rInfo->ri_TrigInstrument;
                  }
!                 else
                  {
!                     /* Hard way: we manage the resources for ourselves */
!                     if (rel)
!                         heap_close(rel, NoLock);
!                     if (trigdesc)
!                         FreeTriggerDesc(trigdesc);
!                     if (finfo)
!                         pfree(finfo);
!                     Assert(instr == NULL);        /* never used in this case */

                      /*
                       * We assume that an appropriate lock is still held by the
--- 2389,2410 ----
                      while (nr > 0)
                      {
                          if (rInfo->ri_RelationDesc->rd_id == event->ate_relid)
+                         {
+                             rel = rInfo->ri_RelationDesc;
+                             trigdesc = rInfo->ri_TrigDesc;
+                             finfo = rInfo->ri_TrigFunctions;
+                             instr = rInfo->ri_TrigInstrument;
+                             locally_opened = false;
                              break;
+                         }
                          rInfo++;
                          nr--;
                      }
                  }
!
!                 if (locally_opened)
                  {
!                     /* Hard way: open target relation for ourselves */

                      /*
                       * We assume that an appropriate lock is still held by the
***************
*** 2421,2426 ****
--- 2429,2435 ----
                          palloc0(trigdesc->numtriggers * sizeof(FmgrInfo));

                      /* Never any EXPLAIN info in this case */
+                     instr = NULL;
                  }
              }

***************
*** 2471,2477 ****
      events->tail = prev_event;

      /* Release working resources */
!     if (!estate)
      {
          if (rel)
              heap_close(rel, NoLock);
--- 2480,2486 ----
      events->tail = prev_event;

      /* Release working resources */
!     if (locally_opened)
      {
          if (rel)
              heap_close(rel, NoLock);
***************
*** 2600,2610 ****
       * IMMEDIATE: all events we have decided to defer will be available for it
       * to fire.
       *
       * If we find no firable events, we don't have to increment
       * firing_counter.
       */
      events = &afterTriggers->query_stack[afterTriggers->query_depth];
!     if (afterTriggerMarkEvents(events, &afterTriggers->events, true))
      {
          CommandId    firing_id = afterTriggers->firing_counter++;

--- 2609,2621 ----
       * IMMEDIATE: all events we have decided to defer will be available for it
       * to fire.
       *
+      * We loop in case a trigger queues more events.
+      *
       * If we find no firable events, we don't have to increment
       * firing_counter.
       */
      events = &afterTriggers->query_stack[afterTriggers->query_depth];
!     while (afterTriggerMarkEvents(events, &afterTriggers->events, true))
      {
          CommandId    firing_id = afterTriggers->firing_counter++;

***************
*** 2648,2654 ****
          ActiveSnapshot = CopySnapshot(GetTransactionSnapshot());

      /*
!      * Run all the remaining triggers.    Loop until they are all gone, just in
       * case some trigger queues more for us to do.
       */
      while (afterTriggerMarkEvents(events, NULL, false))
--- 2659,2665 ----
          ActiveSnapshot = CopySnapshot(GetTransactionSnapshot());

      /*
!      * Run all the remaining triggers.    Loop until they are all gone, in
       * case some trigger queues more for us to do.
       */
      while (afterTriggerMarkEvents(events, NULL, false))
***************
*** 3211,3217 ****
      {
          AfterTriggerEventList *events = &afterTriggers->events;

!         if (afterTriggerMarkEvents(events, NULL, true))
          {
              CommandId    firing_id = afterTriggers->firing_counter++;

--- 3222,3228 ----
      {
          AfterTriggerEventList *events = &afterTriggers->events;

!         while (afterTriggerMarkEvents(events, NULL, true))
          {
              CommandId    firing_id = afterTriggers->firing_counter++;

Index: src/backend/executor/spi.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/executor/spi.c,v
retrieving revision 1.179
diff -c -r1.179 spi.c
*** src/backend/executor/spi.c    27 Apr 2007 22:05:47 -0000    1.179
--- src/backend/executor/spi.c    14 Aug 2007 18:48:11 -0000
***************
*** 39,47 ****
  static int _SPI_execute_plan(SPIPlanPtr plan,
                    Datum *Values, const char *Nulls,
                    Snapshot snapshot, Snapshot crosscheck_snapshot,
!                   bool read_only, long tcount);

! static int    _SPI_pquery(QueryDesc *queryDesc, long tcount);

  static void _SPI_error_callback(void *arg);

--- 39,47 ----
  static int _SPI_execute_plan(SPIPlanPtr plan,
                    Datum *Values, const char *Nulls,
                    Snapshot snapshot, Snapshot crosscheck_snapshot,
!                   bool read_only, bool fire_triggers, long tcount);

! static int    _SPI_pquery(QueryDesc *queryDesc, bool fire_triggers, long tcount);

  static void _SPI_error_callback(void *arg);

***************
*** 316,322 ****

      res = _SPI_execute_plan(&plan, NULL, NULL,
                              InvalidSnapshot, InvalidSnapshot,
!                             read_only, tcount);

      _SPI_end_call(true);
      return res;
--- 316,322 ----

      res = _SPI_execute_plan(&plan, NULL, NULL,
                              InvalidSnapshot, InvalidSnapshot,
!                             read_only, true, tcount);

      _SPI_end_call(true);
      return res;
***************
*** 349,355 ****
      res = _SPI_execute_plan(plan,
                              Values, Nulls,
                              InvalidSnapshot, InvalidSnapshot,
!                             read_only, tcount);

      _SPI_end_call(true);
      return res;
--- 349,355 ----
      res = _SPI_execute_plan(plan,
                              Values, Nulls,
                              InvalidSnapshot, InvalidSnapshot,
!                             read_only, true, tcount);

      _SPI_end_call(true);
      return res;
***************
*** 364,372 ****

  /*
   * SPI_execute_snapshot -- identical to SPI_execute_plan, except that we allow
!  * the caller to specify exactly which snapshots to use.  This is currently
!  * not documented in spi.sgml because it is only intended for use by RI
!  * triggers.
   *
   * Passing snapshot == InvalidSnapshot will select the normal behavior of
   * fetching a new snapshot for each query.
--- 364,375 ----

  /*
   * SPI_execute_snapshot -- identical to SPI_execute_plan, except that we allow
!  * the caller to specify exactly which snapshots to use.  Also, the caller
!  * may specify that AFTER triggers should be queued as part of the outer
!  * query rather than being fired immediately at the end of the command.
!  *
!  * This is currently not documented in spi.sgml because it is only intended
!  * for use by RI triggers.
   *
   * Passing snapshot == InvalidSnapshot will select the normal behavior of
   * fetching a new snapshot for each query.
***************
*** 375,381 ****
  SPI_execute_snapshot(SPIPlanPtr plan,
                       Datum *Values, const char *Nulls,
                       Snapshot snapshot, Snapshot crosscheck_snapshot,
!                      bool read_only, long tcount)
  {
      int            res;

--- 378,384 ----
  SPI_execute_snapshot(SPIPlanPtr plan,
                       Datum *Values, const char *Nulls,
                       Snapshot snapshot, Snapshot crosscheck_snapshot,
!                      bool read_only, bool fire_triggers, long tcount)
  {
      int            res;

***************
*** 392,398 ****
      res = _SPI_execute_plan(plan,
                              Values, Nulls,
                              snapshot, crosscheck_snapshot,
!                             read_only, tcount);

      _SPI_end_call(true);
      return res;
--- 395,401 ----
      res = _SPI_execute_plan(plan,
                              Values, Nulls,
                              snapshot, crosscheck_snapshot,
!                             read_only, fire_triggers, tcount);

      _SPI_end_call(true);
      return res;
***************
*** 1428,1439 ****
   *        behavior of taking a new snapshot for each query.
   * crosscheck_snapshot: for RI use, all others pass InvalidSnapshot
   * read_only: TRUE for read-only execution (no CommandCounterIncrement)
   * tcount: execution tuple-count limit, or 0 for none
   */
  static int
  _SPI_execute_plan(SPIPlanPtr plan, Datum *Values, const char *Nulls,
                    Snapshot snapshot, Snapshot crosscheck_snapshot,
!                   bool read_only, long tcount)
  {
      volatile int my_res = 0;
      volatile uint32 my_processed = 0;
--- 1431,1444 ----
   *        behavior of taking a new snapshot for each query.
   * crosscheck_snapshot: for RI use, all others pass InvalidSnapshot
   * read_only: TRUE for read-only execution (no CommandCounterIncrement)
+  * fire_triggers: TRUE to fire AFTER triggers at end of query (normal case);
+  *        FALSE means any AFTER triggers are postponed to end of outer query
   * tcount: execution tuple-count limit, or 0 for none
   */
  static int
  _SPI_execute_plan(SPIPlanPtr plan, Datum *Values, const char *Nulls,
                    Snapshot snapshot, Snapshot crosscheck_snapshot,
!                   bool read_only, bool fire_triggers, long tcount)
  {
      volatile int my_res = 0;
      volatile uint32 my_processed = 0;
***************
*** 1589,1595 ****
                                              crosscheck_snapshot,
                                              dest,
                                              paramLI, false);
!                     res = _SPI_pquery(qdesc, canSetTag ? tcount : 0);
                      FreeQueryDesc(qdesc);
                  }
                  else
--- 1594,1601 ----
                                              crosscheck_snapshot,
                                              dest,
                                              paramLI, false);
!                     res = _SPI_pquery(qdesc, fire_triggers,
!                                       canSetTag ? tcount : 0);
                      FreeQueryDesc(qdesc);
                  }
                  else
***************
*** 1680,1686 ****
  }

  static int
! _SPI_pquery(QueryDesc *queryDesc, long tcount)
  {
      int            operation = queryDesc->operation;
      int            res;
--- 1686,1692 ----
  }

  static int
! _SPI_pquery(QueryDesc *queryDesc, bool fire_triggers, long tcount)
  {
      int            operation = queryDesc->operation;
      int            res;
***************
*** 1726,1732 ****
          ResetUsage();
  #endif

!     AfterTriggerBeginQuery();

      ExecutorStart(queryDesc, 0);

--- 1732,1739 ----
          ResetUsage();
  #endif

!     if (fire_triggers)
!         AfterTriggerBeginQuery();

      ExecutorStart(queryDesc, 0);

***************
*** 1743,1749 ****
      }

      /* Take care of any queued AFTER triggers */
!     AfterTriggerEndQuery(queryDesc->estate);

      ExecutorEnd(queryDesc);

--- 1750,1757 ----
      }

      /* Take care of any queued AFTER triggers */
!     if (fire_triggers)
!         AfterTriggerEndQuery(queryDesc->estate);

      ExecutorEnd(queryDesc);

Index: src/backend/utils/adt/ri_triggers.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/utils/adt/ri_triggers.c,v
retrieving revision 1.95
diff -c -r1.95 ri_triggers.c
*** src/backend/utils/adt/ri_triggers.c    5 Jun 2007 21:31:06 -0000    1.95
--- src/backend/utils/adt/ri_triggers.c    14 Aug 2007 18:48:12 -0000
***************
*** 2774,2780 ****
                                        NULL, NULL,
                                        CopySnapshot(GetLatestSnapshot()),
                                        InvalidSnapshot,
!                                       true, 1);

      /* Check result */
      if (spi_result != SPI_OK_SELECT)
--- 2774,2780 ----
                                        NULL, NULL,
                                        CopySnapshot(GetLatestSnapshot()),
                                        InvalidSnapshot,
!                                       true, false, 1);

      /* Check result */
      if (spi_result != SPI_OK_SELECT)
***************
*** 3308,3314 ****
      spi_result = SPI_execute_snapshot(qplan,
                                        vals, nulls,
                                        test_snapshot, crosscheck_snapshot,
!                                       false, limit);

      /* Restore UID */
      SetUserId(save_uid);
--- 3308,3314 ----
      spi_result = SPI_execute_snapshot(qplan,
                                        vals, nulls,
                                        test_snapshot, crosscheck_snapshot,
!                                       false, false, limit);

      /* Restore UID */
      SetUserId(save_uid);
Index: src/include/executor/spi.h
===================================================================
RCS file: /cvsroot/pgsql/src/include/executor/spi.h,v
retrieving revision 1.62
diff -c -r1.62 spi.h
*** src/include/executor/spi.h    25 Jul 2007 12:22:53 -0000    1.62
--- src/include/executor/spi.h    14 Aug 2007 18:48:12 -0000
***************
*** 104,110 ****
                       Datum *Values, const char *Nulls,
                       Snapshot snapshot,
                       Snapshot crosscheck_snapshot,
!                      bool read_only, long tcount);
  extern SPIPlanPtr SPI_prepare(const char *src, int nargs, Oid *argtypes);
  extern SPIPlanPtr SPI_prepare_cursor(const char *src, int nargs, Oid *argtypes,
                                       int cursorOptions);
--- 104,110 ----
                       Datum *Values, const char *Nulls,
                       Snapshot snapshot,
                       Snapshot crosscheck_snapshot,
!                      bool read_only, bool fire_triggers, long tcount);
  extern SPIPlanPtr SPI_prepare(const char *src, int nargs, Oid *argtypes);
  extern SPIPlanPtr SPI_prepare_cursor(const char *src, int nargs, Oid *argtypes,
                                       int cursorOptions);

Heikki Linnakangas <heikki@enterprisedb.com> writes:
> I can take a stab at writing a patch.

I beat you to it, barely.

> Does this need to be backported?

I think it does, seeing that it's a regression from 7.4.

            regards, tom lane