
PL/pgSQL Loop Vs. Batch Update

From: David Wheeler
Date:
Fellow PostgreSQLers,

This post is longish and has a bit of code, but here's my question up-
front: Why are batch queries in my PL/pgSQL functions no faster than
iterating over a loop and executing a series of queries on each
iteration? Are batch queries, or array and series generation, simply
slow in PL/pgSQL?

Now, for the details of my problem. To manage an ordered many-to-many
relation between blog entries and tags, I created these tables:

   CREATE TABLE entry (
     id SERIAL PRIMARY KEY,
     title text,
     content text
   );

   CREATE TABLE tag (
     id SERIAL PRIMARY KEY,
     name text
   );

   CREATE TABLE entry_coll_tag (
     entry_id integer REFERENCES entry(id)
                      ON UPDATE CASCADE
                      ON DELETE CASCADE,
     tag_id   integer REFERENCES tag(id)
                      ON UPDATE CASCADE
                      ON DELETE CASCADE,
     ord      smallint,
     PRIMARY KEY (entry_id, tag_id)
   );

To make it easy to associate an entry with a bunch of tags in a
single query, I wrote this PL/pgSQL function:

   CREATE OR REPLACE FUNCTION entry_coll_tag_set (
       obj_id   integer,
       coll_ids integer[]
   ) RETURNS VOID AS $$
   DECLARE
     -- For checking to see if a tuple was updated.
     update_count smallint;
     -- For looping through the array.
     iloop integer       := 1;
   BEGIN
        -- Lock the containing object tuple to prevent inserts into the
        -- collection table.
       PERFORM true FROM entry WHERE id = obj_id FOR UPDATE;

        -- Negate the ord values of any existing records so as to avoid
        -- unique constraint violations when we renumber them below. We
        -- just set each to a negative number, since negative numbers
        -- are never used for ord.
       UPDATE entry_coll_tag
       SET    ord = -ord
       WHERE  entry_id = obj_id;

       -- Loop through the tag IDs to associate with the entry ID.
        WHILE coll_ids[iloop] IS NOT NULL LOOP
            -- Update possible existing collection record.
            UPDATE entry_coll_tag
            SET    ord = iloop
            WHERE  entry_id = obj_id
                   AND tag_id = coll_ids[iloop];

            -- Only insert the item if nothing was updated.
            IF NOT FOUND THEN
                -- Insert a new record.
                INSERT INTO entry_coll_tag (entry_id, tag_id, ord)
                VALUES (obj_id, coll_ids[iloop], iloop);
            END IF;
            iloop := iloop + 1;
        END LOOP;

       -- Delete any remaining tuples.
       DELETE FROM entry_coll_tag
       WHERE  entry_id = obj_id AND ord < 0;
   END;
   $$ LANGUAGE plpgsql SECURITY DEFINER;
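
For context, the function is called with an entry ID and an array of
tag IDs; for example (assuming an entry with id 1 and tags 3, 5, and 2
already exist):

   -- Associate entry 1 with tags 3, 5, and 2, in that order.
   SELECT entry_coll_tag_set(1, ARRAY[3, 5, 2]);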

Josh Berkus kindly reviewed it, and suggested that I take advantage
of generate_series() and batch updates instead of looping through the
results. Here's the revised version:

   CREATE OR REPLACE FUNCTION entry_coll_tag_set (
       obj_id   integer,
       coll_ids integer[]
   ) RETURNS VOID AS $$
   BEGIN
        -- Lock the containing object tuple to prevent inserts into the
        -- collection table.
       PERFORM true FROM entry WHERE id = obj_id FOR UPDATE;

       -- First set all ords to negative value to prevent unique index
       -- violations.
       UPDATE entry_coll_tag
       SET    ord = -ord
       WHERE  entry_id = obj_id;

        IF NOT FOUND THEN
           -- There are no existing tuples, so just insert the new ones.
           INSERT INTO entry_coll_tag (entry_id, tag_id, ord)
           SELECT obj_id, coll_ids[gs.ser], gs.ser
           FROM   generate_series(1, array_upper(coll_ids, 1))
                  AS gs(ser);
       ELSE
           -- First, update the existing tuples to new ord values.
           UPDATE entry_coll_tag SET ord = ser
           FROM   (
               SELECT gs.ser, coll_ids[gs.ser] as move_tag
               FROM   generate_series(1, array_upper(coll_ids, 1))
                      AS gs(ser)
           ) AS expansion
           WHERE move_tag = entry_coll_tag.tag_id
                 AND entry_id = obj_id;

           -- Now insert the new tuples.
           INSERT INTO entry_coll_tag (entry_id, tag_id, ord )
           SELECT obj_id, coll_ids[gs.ser], gs.ser
           FROM   generate_series(1, array_upper(coll_ids, 1)) AS gs
(ser)
           WHERE  coll_ids[gs.ser] NOT IN (
               SELECT tag_id FROM entry_coll_tag ect2
               WHERE  entry_id = obj_id
           );

           -- Delete any remaining tuples.
           DELETE FROM entry_coll_tag
           WHERE  entry_id = obj_id AND ord < 0;
       END IF;
   END;
   $$ LANGUAGE plpgsql SECURITY DEFINER;

Josh thought that the replacement of the loop with a couple of batch
queries would be an order of magnitude faster. So I happily ran my
benchmark comparing the two approaches. The benchmark actually tests
two different functions that had a similar refactoring, as well as
two other functions that are quite simple. Before the tests, I
inserted 100,000 entry records, and a random number of tag records
(1-10 for each entry) and related entry_coll_tag records, which came
out to around 500,000 entries in entry_coll_tag. I ran VACUUM and
ANALYZE before each test, and got the following results:

   func: 13.67 wallclock seconds (0.11 usr + 1.82 sys = 1.93 CPU) @ 21.95/s
  func2: 13.68 wallclock seconds (0.10 usr + 1.71 sys = 1.81 CPU) @ 21.93/s
   perl: 41.14 wallclock seconds (0.26 usr + 6.94 sys = 7.20 CPU) @  7.29/s

"func" is my original version of the function with the loop, "func2"
is the refactored version with the batch queries, and "perl" is my
attempt to do roughly the same thing in Perl. The benefit of using
the functions over doing the work in app space is obvious, but there
seems to be little difference between the two function approaches. If
you want to see the complete benchmark SQL and test script, it's here:

   http://www.justatheory.com/code/bench_plpgsql_collection_functions2.tgz

So my question is quite simple: Why was the batch query approach no
faster than the loop approach? I assume that if my tests managed 100
or 1000 rows in the collection table, the batch approach would show
an improvement, but why would it not when I was using it to manage
8-10 rows? Have I missed something here?

Thanks,

David


Re: PL/pgSQL Loop Vs. Batch Update

From: Tom Lane
Date:
David Wheeler <david@kineticode.com> writes:
> This post is longish and has a bit of code, but here's my question up-
> front: Why are batch queries in my PL/pgSQL functions no faster than
> iterating over a loop and executing a series of queries for each
> iteration of the loop?

You'd really have to look at the plans generated for each of the
commands in the functions to be sure.  A knee-jerk reaction is to
suggest that that NOT IN might be the core of the problem, but it's
only a guess.
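
For comparison, a NOT EXISTS form of that insert -- a sketch only,
untested, reusing the function's variable names -- would look like:

    -- Same insert as in the function, but with NOT EXISTS in place of
    -- NOT IN, which often gives the planner a friendlier anti-join.
    INSERT INTO entry_coll_tag (entry_id, tag_id, ord)
    SELECT obj_id, coll_ids[gs.ser], gs.ser
    FROM   generate_series(1, array_upper(coll_ids, 1)) AS gs(ser)
    WHERE  NOT EXISTS (
        SELECT 1 FROM entry_coll_tag ect2
        WHERE  ect2.entry_id = obj_id
               AND ect2.tag_id = coll_ids[gs.ser]
    );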

It's a bit tricky to examine the behavior of a parameterized query,
which is what these will all be since they depend on local variables
of the plpgsql function (which are passed as parameters to the main
SQL executor).  The basic idea is

    PREPARE foo(datatype, datatype, ...) AS SELECT ... $1 ... $2 ...

    EXPLAIN ANALYZE EXECUTE foo(value, value)

            regards, tom lane

Re: PL/pgSQL Loop Vs. Batch Update

From: David Wheeler
Date:
On Apr 25, 2006, at 18:19, Tom Lane wrote:

> You'd really have to look at the plans generated for each of the
> commands in the functions to be sure.  A knee-jerk reaction is to
> suggest that that NOT IN might be the core of the problem, but it's
> only a guess.

Well, the rows are indexed (I forgot to include the indexes in my
first post), and given that each entry_id has no more than ten
associated tag_ids, I would expect it to be quite fast, relying on
the primary key index to look up the entry_id first, and then the
associated tag_ids. But that's just a guess on my part, too. Perhaps
I should try a left outer join with tag_id IS NULL?
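
Something like this sketch, perhaps (untested, using the same
variables as the function):

    -- Anti-join via LEFT JOIN ... IS NULL in place of NOT IN.
    INSERT INTO entry_coll_tag (entry_id, tag_id, ord)
    SELECT obj_id, expansion.move_tag, expansion.ser
    FROM   (
        SELECT gs.ser, coll_ids[gs.ser] AS move_tag
        FROM   generate_series(1, array_upper(coll_ids, 1)) AS gs(ser)
    ) AS expansion
    LEFT JOIN entry_coll_tag ect2
           ON ect2.entry_id = obj_id
          AND ect2.tag_id = expansion.move_tag
    WHERE  ect2.tag_id IS NULL;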

> It's a bit tricky to examine the behavior of a parameterized query,
> which is what these will all be since they depend on local variables
> of the plpgsql function (which are passed as parameters to the main
> SQL executor).

Right, that makes sense.

> The basic idea is
>
>     PREPARE foo(datatype, datatype, ...) AS SELECT ... $1 ... $2 ...
>
>     EXPLAIN ANALYZE EXECUTE foo(value, value)

Just on a lark, I tried to get this to work:

try=# explain analyze EXECUTE foo(1, ARRAY[600001,600002,600003,600004,600005,600006,600007]);
                                      QUERY PLAN
--------------------------------------------------------------------------------------
 Result  (cost=0.00..0.01 rows=1 width=0) (actual time=26.241..26.251 rows=1 loops=1)
 Total runtime: 27.512 ms
(2 rows)

That's not much use. Is there no way to EXPLAIN ANALYZE this stuff?

Thanks Tom.

Best,

David


Re: PL/pgSQL Loop Vs. Batch Update

From: Tom Lane
Date:
David Wheeler <david@kineticode.com> writes:
> Just on a lark, I tried to get this to work:

> try=# explain analyze EXECUTE foo(1, ARRAY[600001,600002,600003,600004,600005,600006,600007]);
>                                       QUERY PLAN
> --------------------------------------------------------------------------------------
>  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=26.241..26.251 rows=1 loops=1)
>  Total runtime: 27.512 ms
> (2 rows)

> That's not much use.

It looks like you had something trivial as the definition of foo().
Try one of the actual queries from the plpgsql function.

            regards, tom lane

Re: PL/pgSQL Loop Vs. Batch Update

From: David Wheeler
Date:
On Apr 25, 2006, at 19:36, Tom Lane wrote:

> It looks like you had something trivial as the definition of foo().

Yeah, the function call. :-)

> Try one of the actual queries from the plpgsql function.

Oh. Duh. Will do. Tomorrow.

Best,

David


Re: PL/pgSQL Loop Vs. Batch Update

From: David Wheeler
Date:
On Apr 25, 2006, at 19:36, Tom Lane wrote:

> Try one of the actual queries from the plpgsql function.

Here we go:

try=# PREPARE foo(int, int[], int) AS
try-# INSERT INTO entry_coll_tag (entry_id, tag_id, ord )
try-# SELECT $1, $2[gs.ser], gs.ser + $3
try-# FROM   generate_series(1, array_upper($2, 1)) AS gs(ser)
try-# WHERE  $2[gs.ser] NOT IN (
try(#     SELECT tag_id FROM entry_coll_tag ect2
try(#     WHERE entry_id = $1
try(# );
PREPARE
try=# explain analyze execute foo(100100, ARRAY[600001,600002,600003,600004,600005,600006,600007], 0);
                                                      QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
 Function Scan on generate_series gs  (cost=7.78..25.28 rows=500 width=4) (actual time=80.982..81.265 rows=7 loops=1)
   Filter: (NOT (hashed subplan))
   SubPlan
     ->  Index Scan using idx_entry_tag_ord on entry_coll_tag ect2  (cost=0.00..7.77 rows=5 width=4) (actual time=80.620..80.620 rows=0 loops=1)
           Index Cond: (entry_id = $1)
 Trigger for constraint entry_coll_tag_entry_id_fkey: time=3.210 calls=7
 Trigger for constraint entry_coll_tag_tag_id_fkey: time=4.412 calls=7
 Total runtime: 158.672 ms
(8 rows)

That actually looks pretty good to me. Although, is generate_series()
being rather slow?

Thanks,

David

Re: PL/pgSQL Loop Vs. Batch Update

From: David Wheeler
Date:
On May 2, 2006, at 16:49, David Wheeler wrote:

> On Apr 25, 2006, at 19:36, Tom Lane wrote:
>
>> Try one of the actual queries from the plpgsql function.
>
> Here we go:
>
> try=# PREPARE foo(int, int[], int) AS
> try-# INSERT INTO entry_coll_tag (entry_id, tag_id, ord )
> try-# SELECT $1, $2[gs.ser], gs.ser + $3
> try-# FROM   generate_series(1, array_upper($2, 1)) AS gs(ser)
> try-# WHERE  $2[gs.ser] NOT IN (
> try(#     SELECT tag_id FROM entry_coll_tag ect2
> try(#     WHERE entry_id = $1
> try(# );
> PREPARE
> try=# explain analyze execute foo(100100, ARRAY[600001,600002,600003,600004,600005,600006,600007], 0);
>                                                       QUERY PLAN
> ---------------------------------------------------------------------------------------------------------------------
>  Function Scan on generate_series gs  (cost=7.78..25.28 rows=500 width=4) (actual time=80.982..81.265 rows=7 loops=1)
>    Filter: (NOT (hashed subplan))
>    SubPlan
>      ->  Index Scan using idx_entry_tag_ord on entry_coll_tag ect2  (cost=0.00..7.77 rows=5 width=4) (actual time=80.620..80.620 rows=0 loops=1)
>            Index Cond: (entry_id = $1)
>  Trigger for constraint entry_coll_tag_entry_id_fkey: time=3.210 calls=7
>  Trigger for constraint entry_coll_tag_tag_id_fkey: time=4.412 calls=7
>  Total runtime: 158.672 ms
> (8 rows)
>
> Actually looks pretty good to me. Although is generate_series()
> being rather slow?

Scratch that:

try=# delete from entry_coll_tag ;
DELETE 7
try=# vacuum;
VACUUM
try=# analyze;
ANALYZE
try=# explain analyze execute foo(100100, ARRAY[600001,600002,600003,600004,600005,600006,600007], 0);
                                                      QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
 Function Scan on generate_series gs  (cost=7.78..25.28 rows=500 width=4) (actual time=0.193..0.284 rows=7 loops=1)
   Filter: (NOT (hashed subplan))
   SubPlan
     ->  Index Scan using idx_entry_tag_ord on entry_coll_tag ect2  (cost=0.00..7.77 rows=5 width=4) (actual time=0.022..0.022 rows=0 loops=1)
           Index Cond: (entry_id = $1)
 Trigger for constraint entry_coll_tag_entry_id_fkey: time=0.858 calls=7
 Trigger for constraint entry_coll_tag_tag_id_fkey: time=0.805 calls=7
 Total runtime: 3.266 ms
(8 rows)


So my tests are calling this query six hundred times. Could it be
that it just gets slower over time because the database needs to be
vacuumed? Or perhaps pg_autovacuum is kicking in during execution and
*that* slows things down?
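
One way to check that theory would be to watch the dead-row counts
that VACUUM VERBOSE reports between benchmark runs (a sketch; the
exact output format varies by version):

    -- Reports removed and remaining dead row versions for the table.
    VACUUM VERBOSE entry_coll_tag;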

Thanks,

David



Re: PL/pgSQL Loop Vs. Batch Update

From: David Wheeler
Date:
On May 2, 2006, at 16:52, David Wheeler wrote:

>> Actually looks pretty good to me. Although is generate_series()
>> being rather slow?
>
> Scratch that:

Bah, dammit, there were no rows in that relevant table. Please
disregard my previous EXPLAIN ANALYZE posts.

I've re-run my script and populated it with 549,815 rows. *Now* let's
see what we've got:


try=# VACUUM;
VACUUM
try=# ANALYZE;
ANALYZE
try=# PREPARE foo(int, int[], int) AS
try-# INSERT INTO entry_coll_tag (entry_id, tag_id, ord )
try-# SELECT $1, $2[gs.ser], gs.ser + $3
try-# FROM   generate_series(1, array_upper($2, 1)) AS gs(ser)
try-# WHERE  $2[gs.ser] NOT IN (
try(#      SELECT tag_id FROM entry_coll_tag ect2
try(#      WHERE entry_id = $1
try(# );
PREPARE
try=# explain analyze execute foo(100100, ARRAY[600001,600002,600003,600004,600005,600006,600007], 0);
                                                      QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
 Function Scan on generate_series gs  (cost=9.68..27.18 rows=500 width=4) (actual time=0.965..1.055 rows=7 loops=1)
   Filter: (NOT (hashed subplan))
   SubPlan
     ->  Index Scan using idx_entry_tag_ord on entry_coll_tag ect2  (cost=0.00..9.66 rows=8 width=4) (actual time=0.844..0.844 rows=0 loops=1)
           Index Cond: (entry_id = $1)
 Trigger for constraint entry_coll_tag_entry_id_fkey: time=3.872 calls=7
 Trigger for constraint entry_coll_tag_tag_id_fkey: time=3.872 calls=7
 Total runtime: 12.797 ms
(8 rows)

try=# delete from entry_coll_tag where entry_id = 100100;
DELETE 7
try=# explain analyze execute foo(100100, ARRAY[600001,600002,600003,600004,600005,600006,600007], 0);
                                                      QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
 Function Scan on generate_series gs  (cost=9.68..27.18 rows=500 width=4) (actual time=0.117..0.257 rows=7 loops=1)
   Filter: (NOT (hashed subplan))
   SubPlan
     ->  Index Scan using idx_entry_tag_ord on entry_coll_tag ect2  (cost=0.00..9.66 rows=8 width=4) (actual time=0.058..0.058 rows=0 loops=1)
           Index Cond: (entry_id = $1)
 Trigger for constraint entry_coll_tag_entry_id_fkey: time=0.542 calls=7
 Trigger for constraint entry_coll_tag_tag_id_fkey: time=0.590 calls=7
 Total runtime: 2.118 ms
(8 rows)

Damn, that seems pretty efficient. I wonder if it's the other
function, then. I'll have to work on EXPLAIN ANALYZEing _it_.

Best,

David