Volkan Unsal <spocksplanet@gmail.com> writes:
> I'm trying to remove a key from a jsonb column in a table with 10K rows,
> and the performance is abysmal. When the key is missing, it takes 5
> minutes. When the key is present, it takes even longer.
How wide are the jsonb values? It seems likely that most of this
is TOAST overhead [1], i.e., time to reassemble wide jsonb values
and then split them up again.
As Tumasgiu already mentioned, it'd likely be useful to suppress
updates of rows that don't actually need to change, assuming that
the key appears in a minority of rows. And an index could help
even more, by avoiding the need to reconstruct wide values to
see if the key appears in them.
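A minimal sketch of both ideas, with hypothetical table and column
names (t, attrs, 'obsolete_key') standing in for yours:

```sql
-- Only touch rows that actually contain the key, so unaffected
-- rows are neither detoasted nor rewritten:
UPDATE t
SET attrs = attrs - 'obsolete_key'
WHERE attrs ? 'obsolete_key';

-- A GIN index on the jsonb column lets the planner answer the
-- ? (key-exists) test from the index instead of reconstructing
-- each wide value:
CREATE INDEX t_attrs_gin ON t USING gin (attrs);
```

The default jsonb_ops GIN opclass supports the ? operator, so the
WHERE clause above can use the index directly.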
Of course, if most of the rows need an update, neither of these
will help and you just gotta live with it. Possibly reconsider
your approach of using a large JSONB value to contain fields
you need to update individually. That's never going to be great
for performance. SQL (or at least Postgres) is incapable of
updating portions of columns efficiently.
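One way to restructure, sketched with the same hypothetical names:
pull any individually-updated field out into its own column, so
later updates rewrite only a narrow value rather than the whole
TOASTed blob.

```sql
-- One-time migration: promote a hot field to a real column,
-- then drop it from the jsonb value.
ALTER TABLE t ADD COLUMN status text;

UPDATE t
SET status = attrs ->> 'status',
    attrs  = attrs - 'status';

-- From here on, updating status does not touch the wide jsonb.
UPDATE t SET status = 'archived' WHERE id = 42;
```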
regards, tom lane
[1] https://www.postgresql.org/docs/current/storage-toast.html