Re: Rules and decision logic triggered by / acting on JSONB nodes in Postgres - Mailing list pgsql-general

From Guyren Howe
Subject Re: Rules and decision logic triggered by / acting on JSONB nodes in Postgres
Date
Msg-id 7F877E97-57CA-46AF-914F-30EE48481B30@gmail.com
In response to Rules and decision logic triggered by / acting on JSONB nodes in Postgres  (Amit Kothari <amit@tallyfy.com>)
List pgsql-general
On Sep 1, 2018, at 7:14, Amit Kothari <amit@tallyfy.com> wrote:

Hi all,

I’m sorry if this question is off-topic for postgres-core, but didn’t know where else to ask.

We’ve built a workflow and BPM (business process management) tool with pg as the database, and we’d like to explore having conditional logic within steps in a workflow be triggered as functions when a certain JSONB node changes.

An example -> If (form field contains “bob”) then (change state of another object)

In effect, since all our objects are JSONB - we wanted to “listen” to a change in one JSONB node in order to make a corresponding change in another JSONB node.

Can this be either native in postgres, or even a pg function?

In general - how do people approach rules and decision logic in Postgres at very large scale, e.g. 100,000 rules that must be evaluated on each commit?

If it makes more sense - please see the example screenshot on our features page at https://tallyfy.com/features - which visualizes conditional branching.

That *looks* like each team gets their own workflows, so you could restrict your rules search to the rules for the team that created the change. Anything you could do to restrict the number of rules would help, of course.

One option would be to express rules as an (antecedent, consequent) pair, where the antecedent is the argument to json_extract_path and the consequent is the rest of the arguments to a function that carries out the change — something along those lines, in any event.
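A minimal sketch of that idea, in Python rather than SQL for clarity: the rule names, the `extract_path` helper (mirroring what `json_extract_path` does over JSONB), and the consequent actions are all hypothetical illustrations, not anything from the original post.

```python
# Hypothetical sketch of an (antecedent, consequent) rule pair: the
# antecedent is a JSON path plus a predicate on the extracted value,
# the consequent is an action to carry out when the predicate holds.
from dataclasses import dataclass
from typing import Any, Callable

def extract_path(doc: dict, *path: str) -> Any:
    """Walk a nested dict the way json_extract_path walks JSONB."""
    node = doc
    for key in path:
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node

@dataclass
class Rule:
    path: tuple           # argument to the extract step (the antecedent's path)
    predicate: Callable   # test applied to the extracted value
    consequent: Callable  # change to carry out when the predicate holds

def apply_rules(rules, doc: dict) -> list:
    """Evaluate each rule's antecedent against doc; fire matching consequents."""
    fired = []
    for rule in rules:
        value = extract_path(doc, *rule.path)
        if rule.predicate(value):
            fired.append(rule.consequent(value))
    return fired

# Example: "if form field contains 'bob', change state of another object"
rules = [Rule(("form", "assignee"),
              lambda v: v is not None and "bob" in v,
              lambda v: ("set_state", "other_object", "active"))]
print(apply_rules(rules, {"form": {"assignee": "bob smith"}}))
# -> [('set_state', 'other_object', 'active')]
```

In Postgres itself, the equivalent would presumably be a rules table consulted from a trigger function, with the consequent applied via an UPDATE on the target row's JSONB column.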

Iterating through a small set of such rules would be tolerable. If you really do have 100,000 rules that have to be applied to each change, my first thought is to use a SAT solver. But even there, you would have to compute the results of all your antecedents, so it would depend on how many of those you have.

You might use a map-reduce farm, but all the machines are going to be hitting your database pretty hard.

You might be able to reduce your 100,000 antecedents to something a bit more tractable by finding the overlaps between them and so only computing the shared parts of antecedents once.
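One way to sketch that overlap-finding, again with hypothetical names: group rules by their antecedent's extraction path, so each distinct path is computed once per change rather than once per rule.

```python
# Hypothetical sketch of de-duplicating shared antecedents: rules whose
# antecedents extract the same JSON path are grouped, so the extraction
# runs once per distinct path instead of once per rule.
from collections import defaultdict

def extract_path(doc, *path):
    node = doc
    for key in path:
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node

def apply_rules_shared(rules, doc):
    """rules: list of (path_tuple, predicate, consequent) triples."""
    by_path = defaultdict(list)
    for path, predicate, consequent in rules:
        by_path[path].append((predicate, consequent))

    fired, extractions = [], 0
    for path, group in by_path.items():
        value = extract_path(doc, *path)  # computed once per distinct path
        extractions += 1
        for predicate, consequent in group:
            if predicate(value):
                fired.append(consequent(value))
    return fired, extractions

# Three rules over two distinct paths -> only two extractions.
rules = [
    (("form", "assignee"), lambda v: v == "bob",   lambda v: "notify_bob"),
    (("form", "assignee"), lambda v: v == "alice", lambda v: "notify_alice"),
    (("form", "status"),   lambda v: v == "done",  lambda v: "archive"),
]
fired, n = apply_rules_shared(rules, {"form": {"assignee": "bob", "status": "done"}})
print(fired, n)  # -> ['notify_bob', 'archive'] 2
```

With 100,000 rules concentrated on a much smaller set of JSONB paths, the saving from sharing extractions could be substantial.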

You might want to use LISTEN/NOTIFY so that an external logic engine gets to know about the changes — perhaps using map-reduce, or perhaps just a set of logic engines, each of which takes some part of those 100,000 rules and brute-forces them.
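The sharding part of that idea can be sketched in a few lines; the engine count and the round-robin split are illustrative assumptions, and in practice each slice would live in a worker that LISTENs for the notification.

```python
# Hypothetical sketch of partitioning a large rule set across a pool of
# logic engines, each brute-forcing only its own slice; a NOTIFY payload
# would be fanned out to every shard.
def shard_rules(rules, n_engines):
    """Split the rule list into n_engines roughly equal round-robin slices."""
    return [rules[i::n_engines] for i in range(n_engines)]

rules = list(range(100_000))  # stand-ins for 100,000 rules
shards = shard_rules(rules, 8)
print(len(shards), [len(s) for s in shards])
# -> 8 [12500, 12500, 12500, 12500, 12500, 12500, 12500, 12500]
```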

I’d need to know some more details about what you’re up to to give you a more specific answer, but that’s what occurs to me given what you’ve told us.
