Thread: Re: Using Expanded Objects other than Arrays from plpgsql

Re: Using Expanded Objects other than Arrays from plpgsql

From
Michel Pelletier
Date:
On Sun, Oct 20, 2024 at 8:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Michel Pelletier <pelletier.michel@gmail.com> writes:
> On Sun, Oct 20, 2024 at 10:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

 
>> But it seems like we could get an easy win by adjusting
>> plpgsql_exec_function along the lines of
>> ...

> I tried this change and couldn't get it to work, on the next line:
>     if (VARATT_IS_EXTERNAL_EXPANDED_RW(DatumGetPointer(var->value)))
> var->value might not be a pointer, as it seems at least from my gdb
> scratching, but say an integer.  This segfaults on non-array but
> non-expandable datum.

  We need the same test that exec_assign_value makes,
!var->datatype->typbyval, before it's safe to apply DatumGetPointer.
So line 549 needs to be more like

-                    if (!var->isnull && var->datatype->typisarray)
+                    if (!var->isnull && !var->datatype->typbyval)

> Another comment that caught my eye was this one:
> https://github.com/postgres/postgres/blob/master/src/pl/plpgsql/src/pl_exec.c#L8304
> Not sure what the implication is there.

Yeah, that's some more unfinished business.  I'm not sure if it
matters to your use-case or not.

BTW, we probably should move this thread to pgsql-hackers.

And here we are; thanks for your help on this, Tom.  For some context for anyone joining after the thread switch: I'm writing a Postgres extension that wraps the SuiteSparse:GraphBLAS API and provides new types for sparse and dense matrices and vectors.  It's like a combination of numpy and scipy.sparse, but for Postgres, with an emphasis on graph analytics over sparse adjacency matrices using linear algebra.

I use the expandeddatum API to flatten and expand on-disk compressed representations of these objects into "live" in-memory objects managed by SuiteSparse.  All GraphBLAS objects are opaque handles, and my expanded objects are essentially a box around such a handle.  I use memory context callbacks to free the handles when the memory context of the box is freed.  This works very well, and I've made a lot of progress on creating a very clean algebraic API; here are the doctests for matrices, all generated from live code!
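
Roughly, the box looks like the following sketch.  The expandeddatum.h and memory-context calls are the real Postgres API; the rest is a simplified reconstruction (it borrows the names that show up in my debug output later in this thread), not the actual OneSparse source:

    #include "postgres.h"
    #include "utils/expandeddatum.h"
    #include "utils/memutils.h"
    #include "GraphBLAS.h"

    /* flatten callbacks, defined elsewhere on top of the GraphBLAS
     * serialize API; only their signatures matter for this sketch */
    static Size matrix_get_flat_size(ExpandedObjectHeader *eohptr);
    static void matrix_flatten_into(ExpandedObjectHeader *eohptr,
                                    void *result, Size allocated_size);

    static const ExpandedObjectMethods matrix_methods = {
        matrix_get_flat_size,
        matrix_flatten_into
    };

    /* the expanded "box": a standard header plus the opaque handle */
    typedef struct os_Matrix
    {
        ExpandedObjectHeader hdr;
        GrB_Matrix  matrix;             /* opaque SuiteSparse handle */
        MemoryContextCallback free_cb;
    } os_Matrix;

    /* when the box's memory context dies, free the handle with it */
    static void
    context_callback_matrix_free(void *arg)
    {
        os_Matrix  *m = (os_Matrix *) arg;

        GrB_Matrix_free(&m->matrix);
    }

    static os_Matrix *
    new_matrix(MemoryContext parentcontext)
    {
        MemoryContext objcxt = AllocSetContextCreate(parentcontext,
                                                     "expanded matrix",
                                                     ALLOCSET_DEFAULT_SIZES);
        os_Matrix  *m = MemoryContextAlloc(objcxt, sizeof(os_Matrix));

        EOH_init_header(&m->hdr, &matrix_methods, objcxt);
        m->matrix = NULL;
        m->free_cb.func = context_callback_matrix_free;
        m->free_cb.arg = m;
        MemoryContextRegisterResetCallback(objcxt, &m->free_cb);
        return m;
    }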


Doing some benchmarking I noticed that when writing some simple graph algorithms in plpgsql, my objects were being constantly flattened and expanded.  When I posted my question to pgsql-general, Tom gave me some tips and suggested moving the thread here.

My non-expert summary: plpgsql only optimizes expanded objects if they are arrays.  Non-array expanded objects get flattened/expanded on every use.  For large matrices and vectors this is very expensive.  Ideally I'd like to expand my object, use it throughout the function, return it to another function that may use it, and only flatten it when it goes to disk or it's completely unavoidable.  The comment in expandeddatum.h really sells this as one of the main features:

 * An expanded object is meant to survive across multiple operations, but
 * not to be enormously long-lived; for example it might be a local variable
 * in a PL/pgSQL procedure.  So its extra bulk compared to the on-disk format
 * is a worthwhile trade-off.

In my case it's not a question of saving bulk, the on disk representation of a matrix is not useful at compute time, it needs to be expanded (using GraphBLAS's serialize/deserialize API) for it to be usable for matrix operations like matmul.  In most cases algorithms using these objects iterate in a loop, doing various algebraic operations almost always involving a matmul until they converge on a stable solution or they exhaust the input elements.  Here for example is a "minimum parent BFS" that takes a graph and a starting node, and traverses the graph breadth first, computing a vector of every node and its minimum parent id.

CREATE OR REPLACE FUNCTION bfs(graph matrix, start_node bigint)
    RETURNS vector LANGUAGE plpgsql AS
    $$
    DECLARE
    bfs_vector vector = vector('int32');
    next_vector vector = vector('int32');
    BEGIN
        bfs_vector = set_element(bfs_vector, start_node, 1);
        WHILE bfs_vector != next_vector LOOP
            next_vector = dup(bfs_vector);
            bfs_vector = vxm(bfs_vector, graph, 'any_secondi_int32',
                             w=>bfs_vector, accum=>'min_int32');
        END LOOP;
    RETURN bfs_vector;
    end;
    $$;

(If you're wondering "why would anyone do it this way?": SuiteSparse is optimized for parallel sparse matrix multiplication and has a JIT compiler that can target multiple architectures, at the moment CPUs and CUDA GPUs.  Reusing the same linear algebra already prevalent in graph theory means not having to think about any low-level implementation issues, and having code that is completely portable from CPU to GPU or other accelerators.)

So, I made the two small changes Tom suggested above and I have them in a side fork here:


Good news: my code still works.  Bad news: there is still a lot of flattening/expanding/freeing going on at multiple points in each iteration of the algorithm.  I'll note too that:

    BEGIN
        bfs_vector = set_element(bfs_vector, start_node, 1);

I'd prefer that not to be an assignment; set_element mutates the object (I eventually plan to support subscripting syntax like bfs_vector[start_node] = 1).

same with:

            bfs_vector = vxm(bfs_vector, graph, 'any_secondi_int32',
                             w=>bfs_vector, accum=>'min_int32');

This matmul mutates bfs_vector, so I shouldn't need to reassign it back; at the moment the reassignment seems necessary, otherwise the mutations are lost, but it costs a full flatten/expand cycle.

Short term, my goal is to optimize plpgsql so that my objects stay expanded for the life of the function.  Long term, there's some "unfinished business", to use Tom's words, around the expandeddatum API.  I'm not really qualified to speak on these issues, but this is my understanding of some of them:

  - plpgsql knows how to expand arrays and is hardwired for it, but how would it know how to expand other expandable types?

  - Issues with exec_check_rw_parameter also being hardwired to only optimize expanded objects for array append and prepend, I suspect this has something to do with my issue above about mutating objects in place.

I may have missed something, but hopefully that brings anyone interested in this topic up to speed.

-Michel

Re: Using Expanded Objects other than Arrays from plpgsql

From
Tom Lane
Date:
Michel Pelletier <pelletier.michel@gmail.com> writes:
> Doing some benchmarking I noticed that when writing some simple graph
> algorithms in plpgsql, my objects were being constantly flattened and
> expanded.  Posting my question above to pgsql-general Tom gave me some tips
> and suggested I move the thread here.

> My non-expert summary: plpgsql only optimizes for expanded objects if they
> are arrays.  Non array expanded objects get flattened/expanded on every
> use.  For large matrices and vectors this is very expensive.  Ideally I'd
> like to expand my object, use it throughout the function, return it to
> another function that may use it, and only flatten it when it goes to disk
> or it's completely unavoidable.

Right.  My recollection is that when we first invented expanded
objects, we only implemented expanded arrays, and so it seemed
premature to try to define generalized APIs.  So that went on the back
burner until we got some more cases to look at.  But now we also have
expanded records, and with your use-case as an example of an extension
trying to do similar things, I feel like we have enough examples to
move forward.

As far as the hack we were discussing upthread goes, I realized that
it should test for typlen == -1 not just !typbyval, since the
VARATT_IS_EXTERNAL_EXPANDED_RW test requires that there be a length
word.  With that fix and some comment updates, it looks like the
attached.  I'm inclined to just go ahead and push that.  It won't move
the needle hugely far, but it will help, and it seems simple and safe.

To make more progress, there are basically two areas that we need
to look at:

1. exec_assign_value() has hard-wired knowledge that it's a good idea
to expand an array that's being assigned to a plpgsql variable, and
likewise hard-wired knowledge that calling expand_array() is how to
do that.  The bit in plpgsql_exec_function() that we're patching
in the attached is the same thing, but for the initial assignment of
a function input parameter to a plpgsql variable.  At the time this
was written I was quite unsure that forcible expansion would be a net
win, but the code is nine years old now and there have been few or no
performance complaints.  So maybe it's okay to decide that "always
expand expandable types during assignment" is a suitable policy across
the board, and we don't need to figure out a smarter rule.  It sounds
like that'd probably be a win for your application, which gives me
more faith in the idea than I would've had before.  That leaves the
problem of how to find the right code to call to do the expansion.
Clearly, we need to attach something to data types to make that
possible.  My first thought was to invent a pg_type column (and
CREATE TYPE/ALTER TYPE options, and pg_dump support, yadda yadda)
containing the OID of a function corresponding to expand_array().
However, that work would have to be done again the next time we
think of something we'd like to know about a data type.  So now
I'm thinking that we should steal ideas from the "prosupport" API
(see src/include/nodes/supportnodes.h) and invent a concept of a
"type support" function that can handle an extensible set of
different requests.  The first one would be to pass back the
address of an expansion function comparable to expand_array(),
if the type supports being converted to an expanded object.
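
For concreteness, a request node for that might look about like the
following; this is purely hypothetical, nothing here exists today,
only the prosupport pattern it imitates is real:

    typedef struct TypeSupportRequestExpand
    {
        NodeTag     type;       /* invented tag, by analogy to the
                                 * request nodes in supportnodes.h */
        Oid         typid;      /* input: the type being asked about */

        /*
         * output: an expansion function comparable to expand_array(),
         * or NULL if the type can't be converted to expanded form
         */
        Datum     (*expand_func) (Datum flatdatum,
                                  MemoryContext parentcontext);
    } TypeSupportRequestExpand;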

2. Most of the performance gold is hidden in deciding when we
can optimize operations on expanded objects that look like

    plpgsql_var := f(plpgsql_var, other-parameters);

by passing a R/W rather than R/O expanded pointer to f() and letting
it munge the expanded object in-place.  If we fail to do that then
f() has to construct a new expanded object for its result.  It's
probably still better off getting a R/O pointer than a flat object,
but nonetheless we fail to avoid a lot of data copying.
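
For anyone following along, the machinery behind that distinction is
the existing API in utils/expandeddatum.h.  A fragment to illustrate
("obj" being any struct whose first field is an ExpandedObjectHeader):

    Datum rw = EOHPGetRWDatum(&obj->hdr);   /* callee may mutate */
    Datum ro = EOHPGetRODatum(&obj->hdr);   /* callee must copy first */

    /* plpgsql demotes a R/W datum to R/O before any call that is
     * not known to be safe; typlen is -1 for varlena types */
    d = MakeExpandedObjectReadOnly(d, isnull, -1);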

The difficulty here is that we do not want to change the normal naive
semantics of such an assignment, in particular "if f() throws an error
then the value of plpgsql_var has not been modified".  This means that
we can only optimize when the R/W parameter is to be passed to the
top-level function of the expression (else, some function called later
could throw an error and ruin things).  Furthermore, we need a
guarantee from f() that it will not throw an error after modifying
the value.

As things stand, plpgsql has hard-wired knowledge that
array_subscript_handler(), array_append(), and array_prepend()
are safe in that way, but it knows nothing about anything else.
So one route to making things better seems fairly clear: invent a new
prosupport request that asks whether the function is prepared to make
such a guarantee.  I wonder though if you can guarantee any such thing
for your functions, when you are relying on a library that's probably
not designed with such a restriction in mind.  If this won't work then
we need a new concept.

One idea I was toying with is that it doesn't matter if f()
throws an error so long as the plpgsql function is not executing
within an exception block: if the error propagates out of the plpgsql
function then we no longer care about the value of the variable.
That would very substantially weaken the requirements on how f()
is implemented.  I'm not sure that we could make this work across
multiple levels of plpgsql functions (that is, if the value of the
variable ultimately resides in some outer function) but right now
that's not an issue since no plpgsql function as such would ever
promise to be safe, thus it would never receive a R/W pointer to
some outer function's variable.

> same with:
>             bfs_vector = vxm(bfs_vector, graph, 'any_secondi_int32',
>                              w=>bfs_vector, accum=>'min_int32');
> This matmul mutates bfs_vector, I shouldn't need to reassign it back but at
> the moment it seems necessary otherwise the mutations are lost but this
> costs a full flatten/expand cycle.

I'm not hugely excited about that.  We could imagine extending
this concept to INOUT parameters of procedures, but it doesn't
seem like that buys anything except a small notational savings.
Maybe it would work to allow multiple parameters of a procedure
to be passed as R/W, whereas we're restricted to one for the
function-notation method.  So possibly there's a gain there but
I'm not sure how big.

BTW, if I understand your example correctly then bfs_vector is
being passed to vxm() twice.  This brings up an interesting
point: if we pass the first instance as R/W, and vxm() manipulates it,
then the changes would also be visible in its other parameter "w".
This is certainly not per normal semantics.  A "safe" function would
have to either not have any possibility that two parameters could
refer to the same object, or be coded in a way that made it impervious
to this issue --- in your example, it couldn't look at "w" anymore
once it'd started modifying the first parameter.  Is that an okay
requirement, and if not what shall we do about it?

Anyway that's my brain dump for today.  Thoughts?

            regards, tom lane

diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c
index ea9740e3f8..80b8654fc8 100644
--- a/src/pl/plpgsql/src/pl_exec.c
+++ b/src/pl/plpgsql/src/pl_exec.c
@@ -532,21 +532,22 @@ plpgsql_exec_function(PLpgSQL_function *func, FunctionCallInfo fcinfo,
                                       false);

                     /*
-                     * Force any array-valued parameter to be stored in
-                     * expanded form in our local variable, in hopes of
-                     * improving efficiency of uses of the variable.  (This is
-                     * a hack, really: why only arrays? Need more thought
-                     * about which cases are likely to win.  See also
-                     * typisarray-specific heuristic in exec_assign_value.)
-                     *
-                     * Special cases: If passed a R/W expanded pointer, assume
+                     * If it's a pass-by-reference type, check to see if we
+                     * received a R/W expanded-object pointer.  If so, assume
                      * we can commandeer the object rather than having to copy
                      * it.  If passed a R/O expanded pointer, just keep it as
                      * the value of the variable for the moment.  (We'll force
                      * it to R/W if the variable gets modified, but that may
                      * very well never happen.)
+                     *
+                     * Also, force any flat array value to be stored in
+                     * expanded form in our local variable, in hopes of
+                     * improving efficiency of uses of the variable.  (This is
+                     * a hack, really: why only arrays? Need more thought
+                     * about which cases are likely to win.  See also
+                     * typisarray-specific heuristic in exec_assign_value.)
                      */
-                    if (!var->isnull && var->datatype->typisarray)
+                    if (!var->isnull && var->datatype->typlen == -1)
                     {
                         if (VARATT_IS_EXTERNAL_EXPANDED_RW(DatumGetPointer(var->value)))
                         {
@@ -561,7 +562,7 @@ plpgsql_exec_function(PLpgSQL_function *func, FunctionCallInfo fcinfo,
                         {
                             /* R/O pointer, keep it as-is until assigned to */
                         }
-                        else
+                        else if (var->datatype->typisarray)
                         {
                             /* flat array, so force to expanded form */
                             assign_simple_var(&estate, var,

Re: Using Expanded Objects other than Arrays from plpgsql

From
Michel Pelletier
Date:
On Tue, Oct 22, 2024 at 12:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Michel Pelletier <pelletier.michel@gmail.com> writes:
 
But now we also have
expanded records, and with your use-case as an example of an extension
trying to do similar things, I feel like we have enough examples to
move forward.

Great! 

As far as the hack we were discussing upthread goes, I realized that
it should test for typlen == -1 not just !typbyval, since the
VARATT_IS_EXTERNAL_EXPANDED_RW test requires that there be a length
word.  With that fix and some comment updates, it looks like the
attached.  I'm inclined to just go ahead and push that.  It won't move
the needle hugely far, but it will help, and it seems simple and safe.

I made those changes and my code works a bit faster; it looks like it takes a couple of the top-level expansions out.  I'll have more data in the morning.

To make more progress, there are basically two areas that we need
to look at:

1. exec_assign_value() has hard-wired knowledge that it's a good idea
to expand an array that's being assigned to a plpgsql variable, and
likewise hard-wired knowledge that calling expand_array() is how to
do that.  The bit in plpgsql_exec_function() that we're patching
in the attached is the same thing, but for the initial assignment of
a function input parameter to a plpgsql variable.  At the time this
was written I was quite unsure that forcible expansion would be a net
win, but the code is nine years old now and there have been few or no
performance complaints.  So maybe it's okay to decide that "always
expand expandable types during assignment" is a suitable policy across
the board, and we don't need to figure out a smarter rule.  It sounds
like that'd probably be a win for your application, which gives me
more faith in the idea than I would've had before.

Definitely a win, as the flattened format of my objects has no run-time use, so there is no chance of a net loss for me.  I guess I'm using this feature differently from arrays, which have a usable flattened format, so for them there is a trade-off to weigh in deciding whether to expand or not.  In my case, only the expanded version is useful, and serializing the flat version is expensive.  Formalizing something like expand_array would work well for me, as my expand_matrix function has an identical signature and serves the exact same purpose.
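
For comparison, here is expand_array's declaration (the real one, from
src/include/utils/array.h) next to the shape of mine; my sketch omits
the array-specific metacache argument:

    extern Datum expand_array(Datum arraydatum,
                              MemoryContext parentcontext,
                              ArrayMetaState *metacache);

    /* mine: deserialize a flat blob into a new expanded object
     * and return a datum pointing at it */
    Datum expand_matrix(Datum flatdatum, MemoryContext parentcontext);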

   So now
I'm thinking that we should steal ideas from the "prosupport" API
(see src/include/nodes/supportnodes.h) and invent a concept of a
"type support" function that can handle an extensible set of
different requests.  The first one would be to pass back the
address of an expansion function comparable to expand_array(),
if the type supports being converted to an expanded object.

I'll look into the support code, I haven't seen that before.
 
2. Most of the performance gold is hidden in deciding when we
can optimize operations on expanded objects that look like

        plpgsql_var := f(plpgsql_var, other-parameters);

by passing a R/W rather than R/O expanded pointer to f() and letting
it munge the expanded object in-place.  If we fail to do that then
f() has to construct a new expanded object for its result.  It's
probably still better off getting a R/O pointer than a flat object,
but nonetheless we fail to avoid a lot of data copying.

The difficulty here is that we do not want to change the normal naive
semantics of such an assignment, in particular "if f() throws an error
then the value of plpgsql_var has not been modified".  This means that
we can only optimize when the R/W parameter is to be passed to the
top-level function of the expression (else, some function called later
could throw an error and ruin things).  Furthermore, we need a
guarantee from f() that it will not throw an error after modifying
the value. 

As things stand, plpgsql has hard-wired knowledge that
array_subscript_handler(), array_append(), and array_prepend()
are safe in that way, but it knows nothing about anything else.
So one route to making things better seems fairly clear: invent a new
prosupport request that asks whether the function is prepared to make
such a guarantee.  I wonder though if you can guarantee any such thing
for your functions, when you are relying on a library that's probably
not designed with such a restriction in mind.  If this won't work then
we need a new concept.

This will work for the GraphBLAS API.  The expanded object in my case is really just a small box struct around a GraphBLAS "handle", which is an opaque pointer to data that I cannot mutate; only the library can change the object behind the handle.  The API makes strong guarantees that it will either do the operation and return a success code, or not do the operation and return an error code.  It's not possible (normally) to get a corrupt or incomplete object back.
 

One idea I was toying with is that it doesn't matter if f()
throws an error so long as the plpgsql function is not executing
within an exception block: if the error propagates out of the plpgsql
function then we no longer care about the value of the variable.
That would very substantially weaken the requirements on how f()
is implemented.  I'm not sure that we could make this work across
multiple levels of plpgsql functions (that is, if the value of the
variable ultimately resides in some outer function) but right now
that's not an issue since no plpgsql function as such would ever
promise to be safe, thus it would never receive a R/W pointer to
some outer function's variable.

The water here is pretty deep for me, but I'm pretty sure I get what you're saying.  I'll need to do some more studying of the plpgsql code, which I've spent the last couple of days familiarizing myself with.


> same with:
>             bfs_vector = vxm(bfs_vector, graph, 'any_secondi_int32',
>                              w=>bfs_vector, accum=>'min_int32');
> This matmul mutates bfs_vector, I shouldn't need to reassign it back but at
> the moment it seems necessary otherwise the mutations are lost but this
> costs a full flatten/expand cycle.

I'm not hugely excited about that.  We could imagine extending
this concept to INOUT parameters of procedures, but it doesn't
seem like that buys anything except a small notational savings.
Maybe it would work to allow multiple parameters of a procedure
to be passed as R/W, whereas we're restricted to one for the
function-notation method.  So possibly there's a gain there but
I'm not sure how big.

BTW, if I understand your example correctly then bfs_vector is
being passed to vxm() twice.

Yes, this is not unusual in this form of linear algebra, as multiple operations often accumulate into the same object to prevent a bunch of copying during each iteration of a given algorithm.  There is also a "mask" parameter where another (or the same) object can be provided to either mask in or mask out (complement mask) values during the accumulation phase.  This is very useful for many algorithms; a good example is the Burkhardt method of Triangle Counting (https://journals.sagepub.com/doi/10.1177/1473871616666393), which in GraphBLAS boils down to:

    GrB_mxm (C, A, NULL, semiring, A, A, GrB_DESC_S) ;
    GrB_reduce (&ntri, NULL, monoid, C, NULL) ;
    ntri /= 6 ;

In this case A is three of the parameters to mxm, the left operand, right operand, and a structural mask.  This can be summed up as "A squared, masked by A", which when reduced returns the number of triangles in the graph (times 6).
 
  This brings up an interesting
point: if we pass the first instance as R/W, and vxm() manipulates it,
then the changes would also be visible in its other parameter "w".
This is certainly not per normal semantics.  A "safe" function would
have to either not have any possibility that two parameters could
refer to the same object, or be coded in a way that made it impervious
to this issue --- in your example, it couldn't look at "w" anymore
once it'd started modifying the first parameter.  Is that an okay
requirement, and if not what shall we do about it?

I *think*, if I understand you correctly, that this isn't an issue for the GraphBLAS.  My expanded objects are just boxes around an opaque handle; I don't actually mutate anything inside the box, and I can't see past the opaque pointer.  SuiteSparse may be storing the matrix in one of many different formats, or on a GPU, or who knows; all I have is a handle to "A", which I pass to GraphBLAS methods, and that is the only way I can interact with them.  Here's the definition of that vxm function:


It's pretty straightforward, get the arguments, and pass them to the GraphBLAS API, there is no mutable structure inside the expanded "box", just the handle.

I'm using the expanded object API to solve my two key problems: flattening an object for disk storage, and expanding that object (through the GraphBLAS serialize/deserialize API) back into an object handle.  The handle is secretly just a pointer, of course, to the internal details of the object, but I can't see or change those; only SuiteSparse can.

(btw, sorry about the bad parameter names; "w" is the name from the API spec, which I've been sticking to.  It's the optional output object to use; if one is not passed, I create a new one.  This is similar to numpy's "out" parameter semantics.)

I added some debug instrumentation that might show a bit more of what's going on for me, consider this function:

CREATE OR REPLACE FUNCTION test(graph matrix)
    RETURNS matrix LANGUAGE plpgsql AS
    $$
    DECLARE
      n bigint = nrows(graph);
      m bigint = ncols(graph);
    BEGIN
    RETURN graph;
    end;
    $$;

The graph passes straight through, but first I call two methods to get the number of rows and columns.  When I run it on a graph:

postgres=# select pg_typeof(test(graph)) from test_graphs ;
DEBUG:  matrix_nvals
DEBUG:  DatumGetMatrix
DEBUG:  expand_matrix
DEBUG:  new_matrix
DEBUG:  context_callback_matrix_free
DEBUG:  matrix_ncols
DEBUG:  DatumGetMatrix
DEBUG:  expand_matrix
DEBUG:  new_matrix
DEBUG:  context_callback_matrix_free
 pg_typeof
-----------
 matrix
(1 row)

The matrix gets expanded twice, presumably because the object comes in flat and both nrows() and ncols() expand it, which ends up creating two separate handles and thus two separate objects in the GraphBLAS.
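
My DatumGetMatrix is roughly the usual reuse-or-expand pattern, which
is consistent with that behavior; a sketch (the expandeddatum.h calls
are real, the rest simplified):

    static os_Matrix *
    DatumGetMatrix(Datum d)
    {
        /* already expanded: reuse the existing object and handle */
        if (VARATT_IS_EXTERNAL_EXPANDED(DatumGetPointer(d)))
            return (os_Matrix *) DatumGetEOHP(d);

        /* flat on-disk form: deserialize into a brand-new expanded
         * object, so every caller with a flat datum gets its own
         * handle */
        return (os_Matrix *)
            DatumGetEOHP(expand_matrix(d, CurrentMemoryContext));
    }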

Here's another example:

CREATE OR REPLACE FUNCTION test2(graph matrix)
    RETURNS bigint LANGUAGE plpgsql AS
    $$
    BEGIN
    perform set_element(graph, 1, 1, 1);
    RETURN nvals(graph);
    end;
    $$;
CREATE FUNCTION
postgres=# select test2(matrix('int32'));
DEBUG:  new_matrix
DEBUG:  matrix_get_flat_size
DEBUG:  flatten_matrix
DEBUG:  scalar_int32
DEBUG:  new_scalar
DEBUG:  matrix_set_element
DEBUG:  DatumGetMatrix
DEBUG:  expand_matrix
DEBUG:  new_matrix
DEBUG:  DatumGetScalar
DEBUG:  matrix_get_flat_size
DEBUG:  matrix_get_flat_size
DEBUG:  flatten_matrix
DEBUG:  context_callback_matrix_free
DEBUG:  context_callback_scalar_free
DEBUG:  matrix_nvals
DEBUG:  DatumGetMatrix
DEBUG:  expand_matrix
DEBUG:  new_matrix
DEBUG:  context_callback_matrix_free
DEBUG:  context_callback_matrix_free
 test2
-------
     0
(1 row)

I would expect that to return 1.  If I do "graph = set_element(graph, 1, 1, 1)" it works.

I hope that gives a bit more information about my use cases.  In general I'm very happy with the API; it's very algebraic, and I have a lot of interesting plans for supporting more operators and subscripting syntax, but this issue is now my top priority to see if we can resolve it.  I'm sure I missed something in your detailed plan, so I'll be going over it some more this week.  Please let me know if you have any other questions about my use case or concerns about my expectations.

Thank you!

-Michel

Re: Using Expanded Objects other than Arrays from plpgsql

From
Tom Lane
Date:
Michel Pelletier <pelletier.michel@gmail.com> writes:
> Here's another example:

> CREATE OR REPLACE FUNCTION test2(graph matrix)
>     RETURNS bigint LANGUAGE plpgsql AS
>     $$
>     BEGIN
>     perform set_element(graph, 1, 1, 1);
>     RETURN nvals(graph);
>     end;
>     $$;
> CREATE FUNCTION
> postgres=# select test2(matrix('int32'));
> DEBUG:  new_matrix
> DEBUG:  matrix_get_flat_size
> DEBUG:  flatten_matrix
> DEBUG:  scalar_int32
> DEBUG:  new_scalar
> DEBUG:  matrix_set_element
> DEBUG:  DatumGetMatrix
> DEBUG:  expand_matrix
> DEBUG:  new_matrix
> DEBUG:  DatumGetScalar
> DEBUG:  matrix_get_flat_size
> DEBUG:  matrix_get_flat_size
> DEBUG:  flatten_matrix
> DEBUG:  context_callback_matrix_free
> DEBUG:  context_callback_scalar_free
> DEBUG:  matrix_nvals
> DEBUG:  DatumGetMatrix
> DEBUG:  expand_matrix
> DEBUG:  new_matrix
> DEBUG:  context_callback_matrix_free
> DEBUG:  context_callback_matrix_free
>  test2
> -------
>      0
> (1 row)

I'm a little confused by your debug output.  What are "scalar_int32"
and "new_scalar", and what part of the plpgsql function is causing
them to be invoked?

Another thing that confuses me is why there's a second flatten_matrix
operation happening here.  Shouldn't set_element return its result
as a R/W expanded object?

> I would expect that to return 1.  If I do "graph = set_element(graph, 1, 1,
> 1)" it works.

I think you have a faulty understanding of PERFORM.  It's defined as
"evaluate this expression and throw away the result", so it's *not*
going to change "graph", not even if set_element declares that
argument as INOUT.  (Our interpretation of OUT arguments for functions
is that they're just an alternate notation for specifying the function
result.)  If you want to avoid the explicit assignment back to "graph"
then the thing to do would be to declare set_element as a procedure,
not a function, with an INOUT argument and then call it with CALL.

That's only cosmetically different though, in that the updated
"graph" value is still passed back much as if it were a function
result, and then the CALL infrastructure knows it has to assign that
back to the argument variable.  And, as I tried to explain earlier,
that code path currently has no mechanism for avoiding making a copy
of the graph somewhere along the line: it will pass "graph" to the
procedure as either a flat Datum or a R/O expanded object, so that
set_element will be required to copy that before modifying it.
We can imagine extending whatever we do for "x := f(x)" cases so that
it also works during "CALL p(x)".  But I think that's only going to
yield cosmetic or notational improvements so I don't want to start
with doing that.  Let's focus first on improving the existing
infrastructure for the f(x) case.

            regards, tom lane



Re: Using Expanded Objects other than Arrays from plpgsql

From
Tom Lane
Date:
I wrote:
> One idea I was toying with is that it doesn't matter if f()
> throws an error so long as the plpgsql function is not executing
> within an exception block: if the error propagates out of the plpgsql
> function then we no longer care about the value of the variable.
> That would very substantially weaken the requirements on how f()
> is implemented.

The more I think about this idea the better I like it.  We can
improve on the original concept a bit: the assignment can be
within an exception block so long as the target variable is too.
For example, consider

    DECLARE x float8[];
    BEGIN
       ...
       DECLARE y float8[];
       BEGIN
          x := array_append(x, 42);
          y := array_append(y, 42);
       END;
    EXCEPTION WHEN ...;
    END;

Currently, both calls of array_append are subject to R/W optimization,
so that array_append must provide a strong guarantee that it won't
throw an error after it's begun to change the R/W object.  If we
redefine things so that the optimization is applied only to "y",
then AFAICS we need nothing from array_append.  It only has to be
sure it doesn't corrupt the object so badly that it can't be freed
... but that requirement exists already, for anything dealing with
expanded objects.  So this would put us in a situation where we
could apply the optimization by default, which'd be a huge win.

There is an exception: if we are considering

          x := array_cat(x, x);

then I don't think we can optimize because of the aliasing problem
I mentioned before.  So there'd have to be a restriction that the
target variable is mentioned only once in the function's arguments.
For stuff like your vxm() function, that'd be annoying.  But functions
that need that and are willing to deal with the aliasing hazard could
still provide a prosupport function that promises it's okay.  What
we'd accomplish is that a large fraction of interesting functions
could get the benefit without having to create a prosupport function,
which is a win all around.

Also worth noting: in the above example, we could optimize the
update on "x" too, if we know that "x" is not referenced in the
block's EXCEPTION handlers.  I wouldn't bother with this in the
first version, but it might be worth doing later.

So if we go this way, the universe of functions that can benefit
from the optimization enlarges considerably, and the risk of bugs
that break the optimization drops considerably.  The cost is that
some cases that were optimized before now will not be.  But I
suspect that plpgsql functions where this optimization is key
probably don't contain EXCEPTION handlers at all, so that they
won't notice any change.

Thoughts?

            regards, tom lane



Re: Using Expanded Objects other than Arrays from plpgsql

From
Michel Pelletier
Date:
On Wed, Oct 23, 2024 at 8:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Michel Pelletier <pelletier.michel@gmail.com> writes:
> Here's another example:

> CREATE OR REPLACE FUNCTION test2(graph matrix)
>     RETURNS bigint LANGUAGE plpgsql AS
>     $$
>     BEGIN
>     perform set_element(graph, 1, 1, 1);
>     RETURN nvals(graph);
>     end;
>     $$;
> CREATE FUNCTION
> postgres=# select test2(matrix('int32'));
> DEBUG:  new_matrix
> DEBUG:  matrix_get_flat_size
> DEBUG:  flatten_matrix
> DEBUG:  scalar_int32
> DEBUG:  new_scalar
> DEBUG:  matrix_set_element
> DEBUG:  DatumGetMatrix
> DEBUG:  expand_matrix
> DEBUG:  new_matrix
> DEBUG:  DatumGetScalar
> DEBUG:  matrix_get_flat_size
> DEBUG:  matrix_get_flat_size
> DEBUG:  flatten_matrix
> DEBUG:  context_callback_matrix_free
> DEBUG:  context_callback_scalar_free
> DEBUG:  matrix_nvals
> DEBUG:  DatumGetMatrix
> DEBUG:  expand_matrix
> DEBUG:  new_matrix
> DEBUG:  context_callback_matrix_free
> DEBUG:  context_callback_matrix_free
>  test2
> -------
>      0
> (1 row)

I'm a little confused by your debug output.  What are "scalar_int32"
and "new_scalar", and what part of the plpgsql function is causing
them to be invoked?

GraphBLAS scalars hold a single element value for the matrix type.  Internally, they are simply 1x1 matrices (much like vectors are 1xn matrices).  The function signature is:

set_element(a matrix, i bigint, j bigint, s scalar)

There is a "CAST (integer as scalar)" function (scalar_int32) that casts Postgres integers to GraphBLAS GrB_INT32 scalar elements (which calls new_scalar because like vectors and matrices, they are expanded objects which have a GrB_Scalar "handle").  Scalars are useful for working with individual values, for example reduce() returns a scalar.  There are way more efficient ways to push huge C arrays of values into matrices but for now I'm just working at the element level.

Another thing that confuses me is why there's a second flatten_matrix
operation happening here.  Shouldn't set_element return its result
as a R/W expanded object?

That confuses me too, and my default assumption is always that I'm doing it wrong.  set_element does return a R/W object afaict, here is the return:

https://github.com/OneSparse/OneSparse/blob/main/src/matrix.c#L1726

where:

#define OS_RETURN_MATRIX(_matrix) return EOHPGetRWDatum(&(_matrix)->hdr)


> I would expect that to return 1.  If I do "graph = set_element(graph, 1, 1,
> 1)" it works.

I think you have a faulty understanding of PERFORM.  It's defined as
"evaluate this expression and throw away the result", so it's *not*
going to change "graph", not even if set_element declares that
argument as INOUT. 

Faulty indeed, I was going from the plpgsql statement documentation:

"Sometimes it is useful to evaluate an expression or SELECT query but discard the result, for example when calling a function that has side-effects but no useful result value."

My understanding of "side-effects" was flawed there, but I'm fine with "x = set_element(x...)" anyway as I was trying to follow the example of array_append et al.
 
(Our interpretation of OUT arguments for functions
is that they're just an alternate notation for specifying the function
result.)  If you want to avoid the explicit assignment back to "graph"
then the thing to do would be to declare set_element as a procedure,
not a function, with an INOUT argument and then call it with CALL.

I'll stick with the assignment. 

That's only cosmetically different though, in that the updated
"graph" value is still passed back much as if it were a function
result, and then the CALL infrastructure knows it has to assign that
back to the argument variable.  And, as I tried to explain earlier,
that code path currently has no mechanism for avoiding making a copy
of the graph somewhere along the line: it will pass "graph" to the
procedure as either a flat Datum or a R/O expanded object, so that
set_element will be required to copy that before modifying it.

Right, I'm still figuring out exactly what that code flow is.  This is my first dive into these corners of the code so thank you for being patient with me.  I promise to write up some expanded object documentation if this works!
 
We can imagine extending whatever we do for "x := f(x)" cases so that
it also works during "CALL p(x)".  But I think that's only going to
yield cosmetic or notational improvements so I don't want to start
with doing that.  Let's focus first on improving the existing
infrastructure for the f(x) case.

+1

-Michel 

Re: Using Expanded Objects other than Arrays from plpgsql

From
Tom Lane
Date:
Michel Pelletier <pelletier.michel@gmail.com> writes:
> On Wed, Oct 23, 2024 at 8:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Another thing that confuses me is why there's a second flatten_matrix
>> operation happening here.  Shouldn't set_element return its result
>> as a R/W expanded object?

> That confuses me too, and my default assumption is always that I'm doing it
> wrong.  set_element does return a R/W object afaict, here is the return:
> https://github.com/OneSparse/OneSparse/blob/main/src/matrix.c#L1726

Hmph.  That seems right.  Can you add errbacktrace() to your logging
ereports, in hopes of seeing how we're getting to flatten_matrix?
Or break there with gdb for a more complete/reliable stack trace.

            regards, tom lane



Re: Using Expanded Objects other than Arrays from plpgsql

From
Michel Pelletier
Date:
On Wed, Oct 23, 2024 at 9:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
I wrote:
> One idea I was toying with is that it doesn't matter if f()
> throws an error so long as the plpgsql function is not executing
> within an exception block: if the error propagates out of the plpgsql
> function then we no longer care about the value of the variable.
> That would very substantially weaken the requirements on how f()
> is implemented.

The more I think about this idea the better I like it.  We can
improve on the original concept a bit: the assignment can be
within an exception block so long as the target variable is too.
For example, consider

        DECLARE x float8[];
        BEGIN
           ...
           DECLARE y float8[];
           BEGIN
              x := array_append(x, 42);
              y := array_append(y, 42);
           END;
        EXCEPTION WHEN ...;
        END;

Currently, both calls of array_append are subject to R/W optimization,
so that array_append must provide a strong guarantee that it won't
throw an error after it's begun to change the R/W object.  If we
redefine things so that the optimization is applied only to "y",
then AFAICS we need nothing from array_append.  It only has to be
sure it doesn't corrupt the object so badly that it can't be freed
... but that requirement exists already, for anything dealing with
expanded objects.  So this would put us in a situation where we
could apply the optimization by default, which'd be a huge win.

Great!  I can make that same guarantee.
 
There is an exception: if we are considering

              x := array_cat(x, x);

then I don't think we can optimize because of the aliasing problem
I mentioned before.  So there'd have to be a restriction that the
target variable is mentioned only once in the function's arguments.
For stuff like your vxm() function, that'd be annoying.  But functions
that need that and are willing to deal with the aliasing hazard could
still provide a prosupport function that promises it's okay.  What
we'd accomplish is that a large fraction of interesting functions
could get the benefit without having to create a prosupport function,
which is a win all around.

I see.  I'm not completely clear on the prosupport code, so let me repeat it back just to make sure I'm getting it right: it looks like I'll need to write a C function, specified with CREATE FUNCTION ... SUPPORT, that the planner will call to ask whether it's ok to alias arguments (which is fine for SuiteSparse, so no problem).  You also mentioned a couple emails back a "type support" feature, similar to prosupport, that would let me specify an eager expansion function for my types.
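
Something like this skeleton, I gather; CREATE FUNCTION ... SUPPORT
and SupportRequestSimplify are real today, while the aliasing request
is the part that doesn't exist yet:

    #include "postgres.h"
    #include "fmgr.h"
    #include "nodes/supportnodes.h"

    PG_FUNCTION_INFO_V1(vxm_support);

    /* attached with: CREATE FUNCTION vxm(...) ... SUPPORT vxm_support */
    Datum
    vxm_support(PG_FUNCTION_ARGS)
    {
        Node       *rawreq = (Node *) PG_GETARG_POINTER(0);
        Node       *ret = NULL;

        if (IsA(rawreq, SupportRequestSimplify))
        {
            /* an existing request type: a chance to rewrite the call */
        }
        /* the "aliasing is OK" request discussed here would be
         * dispatched the same way, once such a node type exists */

        PG_RETURN_POINTER(ret);
    }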
 
Also worth noting: in the above example, we could optimize the
update on "x" too, if we know that "x" is not referenced in the
block's EXCEPTION handlers.  I wouldn't bother with this in the
first version, but it might be worth doing later.

So if we go this way, the universe of functions that can benefit
from the optimization enlarges considerably, and the risk of bugs
that break the optimization drops considerably.  The cost is that
some cases that were optimized before now will not be.  But I
suspect that plpgsql functions where this optimization is key
probably don't contain EXCEPTION handlers at all, so that they
won't notice any change.

Sounds like a good tradeoff to me!  Hopefully if anyone does have concerns with this approach they'll see this thread and comment.

Thanks,

-Michel

                        regards, tom lane

Re: Using Expanded Objects other than Arrays from plpgsql

From
Tom Lane
Date:
Michel Pelletier <pelletier.michel@gmail.com> writes:
> On Wed, Oct 23, 2024 at 9:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> For stuff like your vxm() function, that'd be annoying.  But functions
>> that need that and are willing to deal with the aliasing hazard could
>> still provide a prosupport function that promises it's okay.  What
>> we'd accomplish is that a large fraction of interesting functions
>> could get the benefit without having to create a prosupport function,
>> which is a win all around.

> I see, I'm not completely clear on the prosupport code so let me repeat it
> back just so I know I'm getting it right, it looks like I'll need to write
> a C function, that I specify with CREATE FUNCTION ... SUPPORT, that the
> planner will call asking me to tell it that it's ok to alias arguments
> (which is fine for SuiteSparse so no problem).  You also mentioned a couple
> emails back about a "type support" feature similar to prosupport, that
> would allow me to specify an eager expansion function for my types.

Right.  You could omit the support function for anything where
aliasing is impossible (e.g., there's only one matrix argument).
Also, very possibly multiple functions could share the same
support function.

>> Also worth noting: in the above example, we could optimize the
>> update on "x" too, if we know that "x" is not referenced in the
>> block's EXCEPTION handlers.  I wouldn't bother with this in the
>> first version, but it might be worth doing later.

Aside: I'm less excited about that than I was earlier.  It's not wrong
for the example I gave, but in more general cases it doesn't work:

         DECLARE x float8[];
         BEGIN
            ...
            BEGIN
               x := array_append(x, 42);
            EXCEPTION WHEN ...;
            END;
         -- we might use x here
         END;

So for an "x" in a further-out DECLARE section, we'd have to track not
only whether x is used in the EXCEPTION clause, but whether it's used
anyplace that can be reached after the exception block ends.  That's
starting to sound overly complicated for the benefit.

>> So if we go this way, the universe of functions that can benefit
>> from the optimization enlarges considerably, and the risk of bugs
>> that break the optimization drops considerably.  The cost is that
>> some cases that were optimized before now will not be.

> Sounds like a good tradeoff to me!  Hopefully if anyone does have concerns
> with this approach they'll see this thread and comment.

I realized that we could suppress any complaints of that ilk by
creating prosupport functions for the three built-in functions that
are handled today, allowing them to operate on the same rules as now.
I might not bother with that on purely performance grounds, but it's
likely worth doing simply to provide an in-core test case for the code
path where a prosupport function can be called.  I'm still writing up
details, but right now I'm envisioning completely separate sets of
rules for the prosupport case versus the no-prosupport case.

            regards, tom lane



Re: Using Expanded Objects other than Arrays from plpgsql

From
Tom Lane
Date:
I wrote:
> ... I'm still writing up
> details, but right now I'm envisioning completely separate sets of
> rules for the prosupport case versus the no-prosupport case.

So here is the design I've come up with for optimizing R/W expanded
object updates in plpgsql without any special knowledge from a
prosupport function.  AFAICS this requires no assumptions at all
about the behavior of called functions, other than the bare minimum
"you can't corrupt the object to the point where it wouldn't be
cleanly free-able".  In particular that means it can work for
user-written called functions in plpgsql, SQL, or whatever, not
only for C-coded functions.

There are two requirements to apply the optimization:

* If the assignment statement is within a BEGIN ... EXCEPTION block,
its target variable must be declared inside the most-closely-nested
such block.  This ensures that if an error is thrown from within the
assignment statement's expression, we do not care about the value
of the target variable, except to the extent of being able to clean
it up.

* The target variable must be referenced exactly once within the
RHS expression.  This avoids aliasing hazards such as we discussed
upthread.  But unlike the existing rule, that reference can be
anywhere in the RHS --- it doesn't have to be an argument of the
topmost function.  So for example we can optimize
    foo := fee(fi(fo(fum(foo), other_variable), ...));

While I've not tried to write any code yet, I think both of these
conditions should be reasonably easy to verify.

Given that those conditions are met and the current value of the
assignment target variable is a R/W expanded pointer, we can
execute the assignment as follows:

* Provide the R/W expanded pointer as the value of the Param
representing the sole reference.  At the same time, apply
TransferExpandedObject to reparent the object under the transient
eval_mcontext memory context that's being used to evaluate the
RHS expression, and then set the target variable's value to NULL.
(At this point, the R/W object has exactly the same logical
status as any intermediate calculation result that's palloc'd
in the eval_mcontext.)

* At successful completion of the RHS, assign the result to the
target variable normally.  This includes, if it's an R/W expanded
object, reparenting it under the calling function's main context.

If the RHS expression winds up returning the same expanded object
(probably mutated), then the outer function regains ownership of it
after no more than a couple of TransferExpandedObject calls, which
are cheap.  If the RHS returns something different, then either the
original expanded object got discarded during RHS evaluation, or it
will be cleaned up when we reset the eval_mcontext, so that it won't
be leaked.
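
In rough code terms, the hand-off would look about like this, with
TransferExpandedObject() being the real function from
utils/expandeddatum.h and the rest shorthand for pl_exec.c internals:

    /* before RHS evaluation: commandeer the variable's R/W object */
    Datum       oldval = var->value;

    if (VARATT_IS_EXTERNAL_EXPANDED_RW(DatumGetPointer(oldval)))
    {
        /* reparent under the transient expression context, giving it
         * the same status as any intermediate result palloc'd there */
        TransferExpandedObject(oldval, get_eval_mcontext(estate));
        var->value = (Datum) 0;     /* the variable no longer owns it */
        var->isnull = true;
    }

    /* ... evaluate the RHS, with oldval as the Param's value ... */

    /* on success: if the result is a R/W expanded object, reparent it
     * under the function's long-lived datum context before assigning
     * it to the target variable */
    if (VARATT_IS_EXTERNAL_EXPANDED_RW(DatumGetPointer(result)))
        TransferExpandedObject(result, estate->datum_context);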

I didn't originally foresee the need to transfer the object
into the transient memory context.  But this design avoids any
possibility of double-free attempts, which would be likely to
happen if we allow the outer function's variable to hold onto
a reference to the object while the RHS is evaluated.  A function
receiving an R/W reference is entitled to assume that that value
is not otherwise referenced and can be freed when it's no longer
needed, so it might well get freed during RHS evaluation.  By
converting the original R/W object into (effectively) a temporary
value within the RHS evaluation, we make that assumption valid.

So, while this design greatly expands the set of cases we can
optimize, it does lose some cases that the old approach could
support.  I envision addressing that by allowing a prosupport
function attached to the RHS' topmost function to "bless"
other cases as safe, using reasoning similar to the old rules.
(Or different rules, even, but it's on the prosupport function
to be sure it's safe.)  I don't have a detailed design in mind,
but I'm thinking along the lines of just passing the whole RHS
expression to the prosupport function and letting it decide
what's safe.  In any case, we don't need to even call the
prosupport function unless there's an exception block or
multiple RHS references to the target variable.

            regards, tom lane