Thread: VACUUM FULL versus TOAST

VACUUM FULL versus TOAST

From: Tom Lane
So I've gotten things fixed to the point where the regression tests seem
not to fall over when contending with a concurrent "vacuum full pg_class",
and have now expanded the scope of the testing to all the system catalogs.
What's failing for me now is this chunk in opr_sanity:

*** 209,219 ****
      NOT p1.proisagg AND NOT p2.proisagg AND
      (p1.proargtypes[3] < p2.proargtypes[3])
  ORDER BY 1, 2;
!  proargtypes | proargtypes 
! -------------+-------------
!         1114 |        1184
! (1 row)
! 
  SELECT DISTINCT p1.proargtypes[4], p2.proargtypes[4]
  FROM pg_proc AS p1, pg_proc AS p2
  WHERE p1.oid != p2.oid AND
--- 209,215 ----
      NOT p1.proisagg AND NOT p2.proisagg AND
      (p1.proargtypes[3] < p2.proargtypes[3])
  ORDER BY 1, 2;
! ERROR:  missing chunk number 0 for toast value 23902886 in pg_toast_2619
  SELECT DISTINCT p1.proargtypes[4], p2.proargtypes[4]
  FROM pg_proc AS p1, pg_proc AS p2
  WHERE p1.oid != p2.oid AND
 

On investigation, this turns out to occur when the planner is trying to
fetch the value of a toasted attribute in a cached pg_statistic tuple,
and a concurrent "vacuum full pg_statistic" has just finished.  The
problem of course is that vacuum full reassigned all the toast item OIDs
in pg_statistic, so the one we have our hands on is no longer correct.

In general, *any* access to a potentially toasted attribute value in a
catcache entry is at risk here.  I don't think it's going to be
feasible, either from a notational or efficiency standpoint, to insist
that callers always re-lock the source catalog before fetching a
catcache entry from which we might wish to extract a potentially toasted
attribute.
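
To make the at-risk pattern concrete, here is a minimal sketch (the
syscache ID and column are representative stand-ins, not any specific
caller): the attribute value is copied out of the catcache entry, and
the detoast happens afterward, by which time a concurrent VACUUM FULL
may have reassigned the toast value OID it points at.

    /*
     * Hypothetical sketch of the at-risk access pattern; not real code
     * from any particular caller.
     */
    static void
    at_risk_example(Oid relid, int16 attnum)
    {
        HeapTuple   tup;
        Datum       val;
        bool        isnull;

        tup = SearchSysCache3(STATRELATTINH,
                              ObjectIdGetDatum(relid),
                              Int16GetDatum(attnum),
                              BoolGetDatum(false));
        if (!HeapTupleIsValid(tup))
            return;

        val = SysCacheGetAttr(STATRELATTINH, tup,
                              Anum_pg_statistic_stavalues1, &isnull);

        /*
         * If val is an external toast pointer, this detoast reads
         * pg_toast_2619 from disk.  After a concurrent "vacuum full
         * pg_statistic", the value OID inside the pointer is gone, and
         * we fail with "missing chunk number 0 for toast value ...".
         */
        if (!isnull)
            (void) PG_DETOAST_DATUM(val);

        ReleaseSysCache(tup);
    }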

I am thinking that the most reasonable solution is instead to fix VACUUM
FULL/CLUSTER so that they don't change existing toast item OIDs when
vacuuming a system catalog.  They already do some pretty ugly things to
avoid changing the toast table's OID in this case, and locking down the
item OIDs too doesn't seem that much harder.  (Though I've not actually
looked at the code yet...)

The main potential drawback here is that if any varlena items that had
not previously been toasted got toasted, they would require additional
OIDs to be assigned, possibly leading to a duplicate-OID failure.  This
should not happen unless somebody decides to play with the attstorage
properties of a system catalog, and I don't feel too bad about a small
possibility of VAC FULL failing after that.  (Note it should eventually
succeed if you keep trying, since the generated OIDs would keep
changing.)

Thoughts?
        regards, tom lane


Re: VACUUM FULL versus TOAST

From: Heikki Linnakangas
On 14.08.2011 01:13, Tom Lane wrote:
> On investigation, this turns out to occur when the planner is trying to
> fetch the value of a toasted attribute in a cached pg_statistic tuple,
> and a concurrent "vacuum full pg_statistic" has just finished.  The
> problem of course is that vacuum full reassigned all the toast item OIDs
> in pg_statistic, so the one we have our hands on is no longer correct.
>
> In general, *any* access to a potentially toasted attribute value in a
> catcache entry is at risk here.  I don't think it's going to be
> feasible, either from a notational or efficiency standpoint, to insist
> that callers always re-lock the source catalog before fetching a
> catcache entry from which we might wish to extract a potentially toasted
> attribute.
>
> I am thinking that the most reasonable solution is instead to fix VACUUM
> FULL/CLUSTER so that they don't change existing toast item OIDs when
> vacuuming a system catalog.  They already do some pretty ugly things to
> avoid changing the toast table's OID in this case, and locking down the
> item OIDs too doesn't seem that much harder.  (Though I've not actually
> looked at the code yet...)

How about detoasting all datums before caching them? It's surprising 
that a datum that is supposedly in a catalog cache actually needs disk
access to use.
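
For concreteness, roughly what that could look like (a sketch only, not
a worked-out patch): when catcache.c builds an entry, flatten any
external toast pointers up front, so that later lookups never have to
touch the toast table at all.  toast_flatten_tuple() in tuptoaster.c
already does that kind of expansion:

    /*
     * Sketch, assuming this were added to CatalogCacheCreateEntry()
     * before the tuple is copied into CacheMemoryContext.
     */
    if (HeapTupleHasExternal(ntp))
        ntp = toast_flatten_tuple(ntp, cache->cc_tupdesc);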

-- 
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com


Re: VACUUM FULL versus TOAST

From: Tom Lane
Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
> On 14.08.2011 01:13, Tom Lane wrote:
>> I am thinking that the most reasonable solution is instead to fix VACUUM
>> FULL/CLUSTER so that they don't change existing toast item OIDs when
>> vacuuming a system catalog.  They already do some pretty ugly things to
>> avoid changing the toast table's OID in this case, and locking down the
>> item OIDs too doesn't seem that much harder.  (Though I've not actually
>> looked at the code yet...)

> How about detoasting all datums before caching them? It's surprising 
> that a datum that is supposedly in a catalog cache actually needs disk
> access to use.

Don't really want to fix a VACUUM-FULL-induced problem by inserting
distributed overhead into every other operation.

There would be some merit in your suggestion if we knew that all/most
toasted columns would actually get fetched out of the catcache entry
at some point.  Then we'd only be moving the cost around, and might even
save something on repeated accesses.  But I don't think we know that.
In the specific example at hand (pg_statistic entries) it's entirely
plausible that the planner would only need the histogram, or only need
the MCV list, depending on the sorts of queries it was coping with.
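
For example (a sketch against the get_attstatsslot() API in
lsyscache.c, error handling elided; statsTuple and the type info are
assumed to be in hand, as in selfuncs.c), a selectivity function asks
for just the one slot it needs, so eagerly detoasting every slot at
cache-load time would be wasted work whenever only one of them ever
gets examined:

    /* Fetch only the histogram slot of a pg_statistic cache tuple. */
    Datum      *values;
    int         nvalues;

    if (get_attstatsslot(statsTuple, atttype, atttypmod,
                         STATISTIC_KIND_HISTOGRAM, InvalidOid,
                         NULL,          /* don't need the actual operator */
                         &values, &nvalues,
                         NULL, NULL))   /* don't need the numbers array */
    {
        /* ... examine the histogram bounds ... */
        free_attstatsslot(atttype, values, nvalues, NULL, 0);
    }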

There's also a concern about bloating the catcaches if we do that ...
        regards, tom lane


Re: VACUUM FULL versus TOAST

From: Tom Lane
I wrote:
> I am thinking that the most reasonable solution is instead to fix VACUUM
> FULL/CLUSTER so that they don't change existing toast item OIDs when
> vacuuming a system catalog.  They already do some pretty ugly things to
> avoid changing the toast table's OID in this case, and locking down the
> item OIDs too doesn't seem that much harder.  (Though I've not actually
> looked at the code yet...)

Attached is a proposed patch for this.

> The main potential drawback here is that if any varlena items that had
> not previously been toasted got toasted, they would require additional
> OIDs to be assigned, possibly leading to a duplicate-OID failure.  This
> should not happen unless somebody decides to play with the attstorage
> properties of a system catalog, and I don't feel too bad about a small
> possibility of VAC FULL failing after that.  (Note it should eventually
> succeed if you keep trying, since the generated OIDs would keep
> changing.)

I realized that there is an easy fix for that: since tuptoaster.c
already knows what the old toast table OID is, it can just look into
that table to see if each proposed new OID is already in use, and
iterate till it gets a non-conflicting OID.  This may seem kind of
inefficient, but since it's such a corner case, I don't think the code
path will get hit often enough to matter.

Comments?

            regards, tom lane

diff --git a/src/backend/access/heap/tuptoaster.c b/src/backend/access/heap/tuptoaster.c
index 4f4dd69291fd50008f8e313176a02cd5bc955e08..785c679879012508b4925b4f8ce93e1c712f7ec4 100644
*** a/src/backend/access/heap/tuptoaster.c
--- b/src/backend/access/heap/tuptoaster.c
*************** do { \
*** 74,80 ****


  static void toast_delete_datum(Relation rel, Datum value);
! static Datum toast_save_datum(Relation rel, Datum value, int options);
  static struct varlena *toast_fetch_datum(struct varlena * attr);
  static struct varlena *toast_fetch_datum_slice(struct varlena * attr,
                          int32 sliceoffset, int32 length);
--- 74,82 ----


  static void toast_delete_datum(Relation rel, Datum value);
! static Datum toast_save_datum(Relation rel, Datum value,
!                  struct varlena *oldexternal, int options);
! static bool toast_valueid_exists(Oid toastrelid, Oid valueid);
  static struct varlena *toast_fetch_datum(struct varlena * attr);
  static struct varlena *toast_fetch_datum_slice(struct varlena * attr,
                          int32 sliceoffset, int32 length);
*************** toast_insert_or_update(Relation rel, Hea
*** 431,436 ****
--- 433,439 ----
      bool        toast_oldisnull[MaxHeapAttributeNumber];
      Datum        toast_values[MaxHeapAttributeNumber];
      Datum        toast_oldvalues[MaxHeapAttributeNumber];
+     struct varlena *toast_oldexternal[MaxHeapAttributeNumber];
      int32        toast_sizes[MaxHeapAttributeNumber];
      bool        toast_free[MaxHeapAttributeNumber];
      bool        toast_delold[MaxHeapAttributeNumber];
*************** toast_insert_or_update(Relation rel, Hea
*** 466,471 ****
--- 469,475 ----
       * ----------
       */
      memset(toast_action, ' ', numAttrs * sizeof(char));
+     memset(toast_oldexternal, 0, numAttrs * sizeof(struct varlena *));
      memset(toast_free, 0, numAttrs * sizeof(bool));
      memset(toast_delold, 0, numAttrs * sizeof(bool));

*************** toast_insert_or_update(Relation rel, Hea
*** 550,555 ****
--- 554,560 ----
               */
              if (VARATT_IS_EXTERNAL(new_value))
              {
+                 toast_oldexternal[i] = new_value;
                  if (att[i]->attstorage == 'p')
                      new_value = heap_tuple_untoast_attr(new_value);
                  else
*************** toast_insert_or_update(Relation rel, Hea
*** 676,682 ****
          {
              old_value = toast_values[i];
              toast_action[i] = 'p';
!             toast_values[i] = toast_save_datum(rel, toast_values[i], options);
              if (toast_free[i])
                  pfree(DatumGetPointer(old_value));
              toast_free[i] = true;
--- 681,688 ----
          {
              old_value = toast_values[i];
              toast_action[i] = 'p';
!             toast_values[i] = toast_save_datum(rel, toast_values[i],
!                                                toast_oldexternal[i], options);
              if (toast_free[i])
                  pfree(DatumGetPointer(old_value));
              toast_free[i] = true;
*************** toast_insert_or_update(Relation rel, Hea
*** 726,732 ****
          i = biggest_attno;
          old_value = toast_values[i];
          toast_action[i] = 'p';
!         toast_values[i] = toast_save_datum(rel, toast_values[i], options);
          if (toast_free[i])
              pfree(DatumGetPointer(old_value));
          toast_free[i] = true;
--- 732,739 ----
          i = biggest_attno;
          old_value = toast_values[i];
          toast_action[i] = 'p';
!         toast_values[i] = toast_save_datum(rel, toast_values[i],
!                                            toast_oldexternal[i], options);
          if (toast_free[i])
              pfree(DatumGetPointer(old_value));
          toast_free[i] = true;
*************** toast_insert_or_update(Relation rel, Hea
*** 839,845 ****
          i = biggest_attno;
          old_value = toast_values[i];
          toast_action[i] = 'p';
!         toast_values[i] = toast_save_datum(rel, toast_values[i], options);
          if (toast_free[i])
              pfree(DatumGetPointer(old_value));
          toast_free[i] = true;
--- 846,853 ----
          i = biggest_attno;
          old_value = toast_values[i];
          toast_action[i] = 'p';
!         toast_values[i] = toast_save_datum(rel, toast_values[i],
!                                            toast_oldexternal[i], options);
          if (toast_free[i])
              pfree(DatumGetPointer(old_value));
          toast_free[i] = true;
*************** toast_compress_datum(Datum value)
*** 1117,1126 ****
   *
   *    Save one single datum into the secondary relation and return
   *    a Datum reference for it.
   * ----------
   */
  static Datum
! toast_save_datum(Relation rel, Datum value, int options)
  {
      Relation    toastrel;
      Relation    toastidx;
--- 1125,1140 ----
   *
   *    Save one single datum into the secondary relation and return
   *    a Datum reference for it.
+  *
+  * rel: the main relation we're working with (not the toast rel!)
+  * value: datum to be pushed to toast storage
+  * oldexternal: if not NULL, toast pointer previously representing the datum
+  * options: options to be passed to heap_insert() for toast rows
   * ----------
   */
  static Datum
! toast_save_datum(Relation rel, Datum value,
!                  struct varlena *oldexternal, int options)
  {
      Relation    toastrel;
      Relation    toastidx;
*************** toast_save_datum(Relation rel, Datum val
*** 1199,1209 ****
          toast_pointer.va_toastrelid = RelationGetRelid(toastrel);

      /*
!      * Choose an unused OID within the toast table for this toast value.
       */
!     toast_pointer.va_valueid = GetNewOidWithIndex(toastrel,
!                                                   RelationGetRelid(toastidx),
!                                                   (AttrNumber) 1);

      /*
       * Initialize constant parts of the tuple data
--- 1213,1267 ----
          toast_pointer.va_toastrelid = RelationGetRelid(toastrel);

      /*
!      * Choose an OID to use as the value ID for this toast value.
!      *
!      * Normally we just choose an unused OID within the toast table.  But
!      * during table-rewriting operations where we are preserving an existing
!      * toast table OID, we want to preserve toast value OIDs too.  So, if
!      * rd_toastoid is set and we had a prior external value from that same
!      * toast table, re-use its value ID.  If we didn't have a prior external
!      * value (which is a corner case, but possible if the table's attstorage
!      * options have been changed), we have to pick a value ID that doesn't
!      * conflict with either new or existing toast value OIDs.
       */
!     if (!OidIsValid(rel->rd_toastoid))
!     {
!         /* normal case: just choose an unused OID */
!         toast_pointer.va_valueid =
!             GetNewOidWithIndex(toastrel,
!                                RelationGetRelid(toastidx),
!                                (AttrNumber) 1);
!     }
!     else
!     {
!         /* rewrite case: check to see if value was in old toast table */
!         toast_pointer.va_valueid = InvalidOid;
!         if (oldexternal != NULL)
!         {
!             struct varatt_external old_toast_pointer;
!
!             Assert(VARATT_IS_EXTERNAL(oldexternal));
!             /* Must copy to access aligned fields */
!             VARATT_EXTERNAL_GET_POINTER(old_toast_pointer, oldexternal);
!             if (old_toast_pointer.va_toastrelid == rel->rd_toastoid)
!                 toast_pointer.va_valueid = old_toast_pointer.va_valueid;
!         }
!         if (toast_pointer.va_valueid == InvalidOid)
!         {
!             /*
!              * new value; must choose an OID that doesn't conflict in either
!              * old or new toast table
!              */
!             do
!             {
!                 toast_pointer.va_valueid =
!                     GetNewOidWithIndex(toastrel,
!                                        RelationGetRelid(toastidx),
!                                        (AttrNumber) 1);
!             } while (toast_valueid_exists(rel->rd_toastoid,
!                                           toast_pointer.va_valueid));
!         }
!     }

      /*
       * Initialize constant parts of the tuple data
*************** toast_delete_datum(Relation rel, Datum v
*** 1339,1344 ****
--- 1397,1448 ----


  /* ----------
+  * toast_valueid_exists -
+  *
+  *    Test whether a toast value with the given ID exists in the toast relation
+  * ----------
+  */
+ static bool
+ toast_valueid_exists(Oid toastrelid, Oid valueid)
+ {
+     bool        result = false;
+     Relation    toastrel;
+     ScanKeyData toastkey;
+     SysScanDesc toastscan;
+
+     /*
+      * Open the toast relation
+      */
+     toastrel = heap_open(toastrelid, AccessShareLock);
+
+     /*
+      * Setup a scan key to find chunks with matching va_valueid
+      */
+     ScanKeyInit(&toastkey,
+                 (AttrNumber) 1,
+                 BTEqualStrategyNumber, F_OIDEQ,
+                 ObjectIdGetDatum(valueid));
+
+     /*
+      * Is there any such chunk?
+      */
+     toastscan = systable_beginscan(toastrel, toastrel->rd_rel->reltoastidxid,
+                                    true, SnapshotToast, 1, &toastkey);
+
+     if (systable_getnext(toastscan) != NULL)
+         result = true;
+
+     /*
+      * End scan and close relations
+      */
+     systable_endscan(toastscan);
+     heap_close(toastrel, AccessShareLock);
+
+     return result;
+ }
+
+
+ /* ----------
   * toast_fetch_datum -
   *
   *    Reconstruct an in memory Datum from the chunks saved
diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index 9a7649bb4f95581e0169f0a341a9d8d0b1287db3..670d29ea83192a82297100a16b59e7a48303f1fd 100644
*** a/src/backend/commands/cluster.c
--- b/src/backend/commands/cluster.c
*************** copy_heap_data(Oid OIDNewHeap, Oid OIDOl
*** 797,802 ****
--- 797,806 ----
           * When doing swap by content, any toast pointers written into NewHeap
           * must use the old toast table's OID, because that's where the toast
           * data will eventually be found.  Set this up by setting rd_toastoid.
+          * This also tells tuptoaster.c to preserve the toast value OIDs,
+          * which we want so as not to invalidate toast pointers in system
+          * catalog caches.
+          *
           * Note that we must hold NewHeap open until we are done writing data,
           * since the relcache will not guarantee to remember this setting once
           * the relation is closed.    Also, this technique depends on the fact

Re: VACUUM FULL versus TOAST

From: Greg Stark
On Sun, Aug 14, 2011 at 5:15 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> There would be some merit in your suggestion if we knew that all/most
> toasted columns would actually get fetched out of the catcache entry
> at some point.  Then we'd only be moving the cost around, and might even
> save something on repeated accesses.  But I don't think we know that.
> In the specific example at hand (pg_statistic entries) it's entirely
> plausible that the planner would only need the histogram, or only need
> the MCV list, depending on the sorts of queries it was coping with.

Fwiw detoasting statistics entries sounds like a fine idea to me. I've
often seen queries that are unexpectedly slow to plan and chalked it
up to statistics entries getting toasted. If it's ok to read either
the histogram or MCV list from disk every time we plan a query, then
why are we bothering with an in-memory cache of the statistics at all?

The only thing that gives me pause is that it's possible these entries
are *really* large. If you have a decent number of tables that are all
a few megabytes of histograms then things could go poorly. But I don't
think having to read in these entries from pg_toast every time you
plan a query is going to go much better for you either.



--
greg


Re: VACUUM FULL versus TOAST

From: Robert Haas
On Sun, Aug 14, 2011 at 7:43 PM, Greg Stark <stark@mit.edu> wrote:
> On Sun, Aug 14, 2011 at 5:15 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> There would be some merit in your suggestion if we knew that all/most
>> toasted columns would actually get fetched out of the catcache entry
>> at some point.  Then we'd only be moving the cost around, and might even
>> save something on repeated accesses.  But I don't think we know that.
>> In the specific example at hand (pg_statistic entries) it's entirely
>> plausible that the planner would only need the histogram, or only need
>> the MCV list, depending on the sorts of queries it was coping with.
>
> Fwiw detoasting statistics entries sounds like a fine idea to me. I've
> often seen queries that are unexpectedly slow to plan and chalked it
> up to statistics entries getting toasted. If it's ok to read either
> the histogram or MCV list from disk every time we plan a query, then
> why are we bothering with an in-memory cache of the statistics at all?
>
> The only thing that gives me pause is that it's possible these entries
> are *really* large. If you have a decent number of tables that are all
> a few megabytes of histograms then things could go poorly. But I don't
> think having to read in these entries from pg_toast every time you
> plan a query is going to go much better for you either.

Yep; in fact, I've previously submitted test results showing that
repeatedly decompressing TOAST entries can significantly slow down
query planning.

That having been said, Tom's fix seems safer to back-patch.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company