Thread: Heavy write activity on first vacuum of fresh TOAST data

Heavy write activity on first vacuum of fresh TOAST data

From
"Kevin Grittner"
Date:
Yesterday we moved a 300 GB table containing document images (mostly raster-scanned from paper) into a 215 GB
PostgreSQL 8.2.5 database which contains the related case management data.  (This separation was never "right", since
there were links from one to the other, but was necessary under our previous database package for practical reasons.)

The data was inserted through a Java program using a prepared statement with no indexes on the table.  The primary key
was then added, and now I've started a vacuum.  The new table wound up being the first big table vacuumed, and I noticed
something odd.  Even though there have been no rollbacks, updates, or deletes on this table, the vacuum is writing as
much as it is reading while dealing with the TOAST data.
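
In outline, the load amounted to something like this (a sketch only; the
domain types shown in the table definition below are replaced here by
their underlying base types):

CREATE TABLE "DocImage" (
    "countyNo"  smallint NOT NULL,
    "docId"     integer  NOT NULL,
    "sectionNo" integer  NOT NULL,
    "docImage"  bytea    NOT NULL
);
-- ... bulk INSERTs from the Java prepared statement run here ...
ALTER TABLE "DocImage"
    ADD CONSTRAINT "DocImage_pkey"
    PRIMARY KEY ("countyNo", "docId", "sectionNo");
VACUUM ANALYZE VERBOSE;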

Here's the current tail of the VACUUM ANALYZE VERBOSE output:

INFO:  analyzing "pg_catalog.pg_auth_members"
INFO:  "pg_auth_members": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated
totalrows 
INFO:  vacuuming "public.DocImage"
INFO:  index "DocImage_pkey" now contains 2744753 row versions in 10571 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.15s/0.01u sec elapsed 0.28 sec.
INFO:  "DocImage": found 0 removable, 2744753 nonremovable row versions in 22901 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
32 pages contain useful free space.
0 pages are entirely empty.
CPU 0.46s/0.10u sec elapsed 1.12 sec.
INFO:  vacuuming "pg_toast.pg_toast_7729979"

And here's a snippet from vmstat 1 output:

procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  1    156 316500    556 63491808    0    0 58544 57910 1459 2417  2  4 85  9  0
 0  1    156 317884    556 63490780    0    0 61240 62087 1425 2106  2  5 84  9  0
 0  3    156 307500    556 63499004    0    0 56968 57882 1472 2091  2  5 84 10  0
 2  0    156 309280    556 63497976    0    0 59920 58218 1600 4503  5  4 79 11  0
 0  1    156 313592    556 63494892    0    0 57608 62371 1695 3425  3  5 84  8  0
 2  1    156 305844    556 63502088    0    0 54568 58164 1644 2962  3  4 84  9  0
 0  1    156 306560    556 63502088    0    0 61080 57949 1494 2808  3  5 83  9  0
 1  0    156 303432    552 63505176    0    0 49784 53972 1481 2629  2  4 84 10  0
 0  1    156 308232    552 63500036    0    0 57496 57903 1426 1954  1  4 85  9  0
 1  0    156 309008    552 63499008    0    0 62000 61962 1442 2401  2  4 85  8  0

It's been like this for over half an hour.  Not that I expect a vacuum of a 300 GB table to be blindingly fast, but if
the data has just been inserted, why all those writes?

-Kevin

 PostgreSQL 8.2.5 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20070115 (prerelease) (SUSE Linux)

ccsa@SOCRATES:/var/pgsql/data/cc> free -m
             total       used       free     shared    buffers     cached
Mem:         64446      64147        298          0          0      62018
-/+ buffers/cache:       2128      62318
Swap:         1027          0       1027

listen_addresses = '*'
port = 5412
max_connections = 200
shared_buffers = 160MB
temp_buffers = 50MB
work_mem = 32MB
maintenance_work_mem = 1GB
max_fsm_pages = 800000
bgwriter_lru_percent = 20.0
bgwriter_lru_maxpages = 200
bgwriter_all_percent = 10.0
bgwriter_all_maxpages = 600
wal_buffers = 1MB
checkpoint_segments = 50
checkpoint_timeout = 30min
seq_page_cost = 0.5
random_page_cost = 0.8
effective_cache_size = 63GB
geqo = off
from_collapse_limit = 15
join_collapse_limit = 15
redirect_stderr = on
log_line_prefix = '[%m] %p %q<%u %d %r> '
stats_block_level = on
stats_row_level = on
autovacuum = on
autovacuum_naptime = 10s
autovacuum_vacuum_threshold = 1
autovacuum_analyze_threshold = 1
datestyle = 'iso, mdy'
lc_messages = 'C'
lc_monetary = 'C'
lc_numeric = 'C'
lc_time = 'C'
escape_string_warning = off
standard_conforming_strings = on
sql_inheritance = off

BINDIR = /usr/local/pgsql-8.2.5/bin
DOCDIR = /usr/local/pgsql-8.2.5/doc
INCLUDEDIR = /usr/local/pgsql-8.2.5/include
PKGINCLUDEDIR = /usr/local/pgsql-8.2.5/include
INCLUDEDIR-SERVER = /usr/local/pgsql-8.2.5/include/server
LIBDIR = /usr/local/pgsql-8.2.5/lib
PKGLIBDIR = /usr/local/pgsql-8.2.5/lib
LOCALEDIR =
MANDIR = /usr/local/pgsql-8.2.5/man
SHAREDIR = /usr/local/pgsql-8.2.5/share
SYSCONFDIR = /usr/local/pgsql-8.2.5/etc
PGXS = /usr/local/pgsql-8.2.5/lib/pgxs/src/makefiles/pgxs.mk
CONFIGURE = '--prefix=/usr/local/pgsql-8.2.5' '--enable-integer-datetimes' '--enable-debug' '--disable-nls'
CC = gcc
CPPFLAGS = -D_GNU_SOURCE
CFLAGS = -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -g
CFLAGS_SL = -fpic
LDFLAGS = -Wl,-rpath,'/usr/local/pgsql-8.2.5/lib'
LDFLAGS_SL =
LIBS = -lpgport -lz -lreadline -lcrypt -ldl -lm
VERSION = PostgreSQL 8.2.5

       Table "public.DocImage"
  Column   |     Type     | Modifiers
-----------+--------------+-----------
 countyNo  | "CountyNoT"  | not null
 docId     | "DocIdT"     | not null
 sectionNo | "SectionNoT" | not null
 docImage  | "ImageT"     | not null
Indexes:
    "DocImage_pkey" PRIMARY KEY, btree ("countyNo", "docId", "sectionNo")

 Schema |  Name      |  Type    | Modifier | Check
--------+------------+----------+----------+-------
 public | CountyNoT  | smallint |          |
 public | DocIdT     | integer  |          |
 public | SectionNoT | integer  |          |
 public | ImageT     | bytea    |          |



Re: Heavy write activity on first vacuum of fresh TOAST data

From
Simon Riggs
Date:
On Thu, 2007-12-13 at 09:46 -0600, Kevin Grittner wrote:

> The data was inserted through a Java program using a prepared
> statement with no indexes on the table.  The primary key was then
> added, and now I've started a vacuum.  The new table wound up being
> the first big table vacuumed, and I noticed something odd.  Even
> though there have been no rollbacks, updates, or deletes on this
> table, the vacuum is writing as much as it is reading while dealing
> with the TOAST data.

Writing hint bits. Annoying isn't it? :-(

--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com
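
For context: hint bits are per-tuple status flags in the tuple header's
t_infomask, recording whether the inserting (xmin) or deleting (xmax)
transaction is known committed or aborted.  Whichever backend first checks
a tuple's visibility sets them, dirtying the page even on a read-only scan.
On 8.3 and later they can be inspected with the contrib pageinspect module
(not available on the 8.2.5 server discussed here); a hypothetical session,
with an illustrative table name:

SELECT lp, t_xmin,
       (t_infomask & 256)  <> 0 AS xmin_committed,  -- HEAP_XMIN_COMMITTED = 0x0100
       (t_infomask & 2048) <> 0 AS xmax_invalid     -- HEAP_XMAX_INVALID   = 0x0800
FROM heap_page_items(get_raw_page('some_table', 0));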


Re: Heavy write activity on first vacuum of fresh TOAST data

From
"Kevin Grittner"
Date:
>>> On Thu, Dec 13, 2007 at 10:11 AM, in message
<1197562283.4255.1829.camel@ebony.site>, Simon Riggs <simon@2ndquadrant.com>
wrote:
> On Thu, 2007-12-13 at 09:46 -0600, Kevin Grittner wrote:
>
>> The data was inserted through a Java program using a prepared
>> statement with no indexes on the table.  The primary key was then
>> added, and now I've started a vacuum.  The new table wound up being
>> the first big table vacuumed, and I noticed something odd.  Even
>> though there have been no rollbacks, updates, or deletes on this
>> table, the vacuum is writing as much as it is reading while dealing
>> with the TOAST data.
>
> Writing hint bits. Annoying isn't it? :-(

Surprising, anyway.  If it allows subsequent operations to be
faster, I'll take it; although to a naive user it's not clear what
is known at vacuum time that the INSERT into the empty table
couldn't have inferred.  Bulk loads into empty tables are a pretty
common use case, so if there were some way to set the hints on
insert, as long as the table started the database transaction
empty, nobody else was modifying it, and only inserts had occurred,
that would be a good thing.  I'm speaking from the perspective of a
user, of course, not someone who would actually try to wrangle the
code into working that way.
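
Later releases did add essentially this optimization: as of PostgreSQL 9.3,
COPY accepts a FREEZE option that writes the tuples pre-hinted and
pre-frozen, under conditions much like those described above (the table
must have been created or truncated in the same transaction, so nobody
else can see the rows early).  A sketch, with illustrative names:

BEGIN;
TRUNCATE images;                                  -- or CREATE TABLE here
COPY images FROM '/tmp/images.dat' WITH (FREEZE);
COMMIT;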

Thanks for the explanation.

-Kevin




Re: Heavy write activity on first vacuum of fresh TOAST data

From
Tom Lane
Date:
"Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:
> ... although to a naive user it's not clear what
> is known at vacuum time that the INSERT into the empty table
> couldn't have inferred.

The fact that the INSERT actually committed.

            regards, tom lane

Re: Heavy write activity on first vacuum of fresh TOAST data

From
"Kevin Grittner"
Date:
>>> On Thu, Dec 13, 2007 at 10:35 AM, in message <13267.1197563721@sss.pgh.pa.us>,
Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:
>> ... although to a naive user it's not clear what
>> is known at vacuum time that the INSERT into the empty table
>> couldn't have inferred.
>
> The fact that the INSERT actually committed.

Fair enough.  I suppose the possibility of access before the
commit would preclude any optimization that would assume the
commit is more likely than a rollback, and do the extra work only in
the unusual case?

-Kevin




Re: Heavy write activity on first vacuum of fresh TOAST data

From
Simon Riggs
Date:
On Thu, 2007-12-13 at 10:39 -0600, Kevin Grittner wrote:
> >>> On Thu, Dec 13, 2007 at 10:35 AM, in message <13267.1197563721@sss.pgh.pa.us>,
> Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > "Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:
> >> ... although to a naive user it's not clear what
> >> is known at vacuum time that the INSERT into the empty table
> >> couldn't have inferred.
> >
> > The fact that the INSERT actually committed.
>
> Fair enough.  I suppose the possibility of access before the
> commit would preclude any optimization that would assume the
> commit is more likely than a rollback, and do the extra work only in
> the unusual case?

No chance. There's an optimization of COPY I've not got around to as
yet, but nothing straightforward we can do with the normal case.

We might be able to have bgwriter set hint bits on dirty blocks, but the
success of that would depend upon the transit time of blocks through the
cache, i.e. it might be totally ineffective. So it might be just overhead
for the bgwriter and, worse, could divert bgwriter attention away from
what it's supposed to be doing. That's a lot of work to fiddle with the
knobs to improve things and there are higher things on the list AFAICS.

--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


Re: Heavy write activity on first vacuum of fresh TOAST data

From
Alvaro Herrera
Date:
Simon Riggs wrote:

> We might be able to have bgwriter set hint bits on dirty blocks, but the
> success of that would depend upon the transit time of blocks through the
> cache, i.e. it might be totally ineffective. So it might be just overhead
> for the bgwriter and, worse, could divert bgwriter attention away from
> what it's supposed to be doing. That's a lot of work to fiddle with the
> knobs to improve things and there are higher things on the list AFAICS.

I don't think that works, because the bgwriter has no access to the
catalogs, therefore it cannot examine the page contents.  To bgwriter,
pages are opaque.

--
Alvaro Herrera                  http://www.amazon.com/gp/registry/5ZYLFMCVHXC
"There is evil in the world. There are dark, awful things. Occasionally, we get
a glimpse of them. But there are dark corners; horrors almost impossible to
imagine... even in our worst nightmares." (Van Helsing, Dracula A.D. 1972)

Re: Heavy write activity on first vacuum of fresh TOAST data

From
"Kevin Grittner"
Date:
>>> On Thu, Dec 13, 2007 at 10:11 AM, in message
<1197562283.4255.1829.camel@ebony.site>, Simon Riggs <simon@2ndquadrant.com>
wrote:
> On Thu, 2007-12-13 at 09:46 -0600, Kevin Grittner wrote:
>
>> The data was inserted through a Java program using a prepared
>> statement with no indexes on the table.  The primary key was then
>> added, and now I've started a vacuum.  The new table wound up being
>> the first big table vacuumed, and I noticed something odd.  Even
>> though there have been no rollbacks, updates, or deletes on this
>> table, the vacuum is writing as much as it is reading while dealing
>> with the TOAST data.
>
> Writing hint bits. Annoying isn't it? :-(

Is there anything in the documentation that mentions this pattern
of activity?  Since I started clearing the WAL file tails before
compression, it has surprised me how much WAL file activity there
is from the nightly vacuum.  I had assumed that some part of this
was freezing old tuples, but that didn't seem to exactly match the
pattern of activity.  If the hint bit changes are written to the
WAL, I think this explains it.

Maybe this is too arcane for the docs, but I'm not so sure.
Effectively, it means that every new tuple which has much of a
lifespan has to be written at least three times, if I'm
understanding you: once during the database transaction which
creates it, once in the first subsequent vacuum of that table to
flag that it was committed, and again when it reaches the freeze
threshold to prevent transaction number wraparound.

That last one could be sort of a surprise for someone at some
point after, say, restoring from pg_dump, couldn't it?  Would it
make any kind of sense for a person to do the first vacuum after
a bulk load using the FREEZE keyword (or the more recent
equivalent setting)?
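
The "more recent equivalent setting" is presumably vacuum_freeze_min_age
(new in 8.2); either form makes the first vacuum set the hint bits and
freeze the tuples in a single pass over the freshly loaded table:

VACUUM FREEZE "DocImage";
-- or, equivalently:
SET vacuum_freeze_min_age = 0;
VACUUM "DocImage";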

-Kevin



Re: Heavy write activity on first vacuum of fresh TOAST data

From
Simon Riggs
Date:
On Thu, 2007-12-13 at 11:46 -0600, Kevin Grittner wrote:
> >>> On Thu, Dec 13, 2007 at 10:11 AM, in message
> <1197562283.4255.1829.camel@ebony.site>, Simon Riggs <simon@2ndquadrant.com>
> wrote:
> > On Thu, 2007-12-13 at 09:46 -0600, Kevin Grittner wrote:
> >
> >> The data was inserted through a Java program using a prepared
> >> statement with no indexes on the table.  The primary key was then
> >> added, and now I've started a vacuum.  The new table wound up being
> >> the first big table vacuumed, and I noticed something odd.  Even
> >> though there have been no rollbacks, updates, or deletes on this
> >> table, the vacuum is writing as much as it is reading while dealing
> >> with the TOAST data.
> >
> > Writing hint bits. Annoying isn't it? :-(
>
> Is there anything in the documentation that mentions this pattern
> of activity?  Since I started clearing the WAL file tails before
> compression, it has surprised me how much WAL file activity there
> is from the nightly vacuum.  I had assumed that some part of this
> was freezing old tuples, but that didn't seem to exactly match the
> pattern of activity.  If the hint bit changes are written to the
> WAL, I think this explains it.

They're not.

--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


Re: Heavy write activity on first vacuum of fresh TOAST data

From
Tom Lane
Date:
Alvaro Herrera <alvherre@alvh.no-ip.org> writes:
> Simon Riggs wrote:
>> We might be able to have bgwriter set hint bits on dirty blocks,

> I don't think that works, because the bgwriter has no access to the
> catalogs, therefore it cannot examine the page contents.  To bgwriter,
> pages are opaque.

Another issue is that this'd require bgwriter to access the clog SLRU
area.  I seem to remember worrying that that could lead to low-level
deadlocks, though I cannot recall the exact case at the moment.
Even without that, it would increase contention for SLRU, which we
probably don't want.

            regards, tom lane

Re: Heavy write activity on first vacuum of fresh TOAST data

From
Simon Riggs
Date:
On Thu, 2007-12-13 at 13:52 -0500, Tom Lane wrote:
> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:
> > Simon Riggs wrote:
> >> We might be able to have bgwriter set hint bits on dirty blocks,
>
> > I don't think that works, because the bgwriter has no access to the
> > catalogs, therefore it cannot examine the page contents.  To bgwriter,
> > pages are opaque.
>
> Another issue is that this'd require bgwriter to access the clog SLRU
> area.  I seem to remember worrying that that could lead to low-level
> deadlocks, though I cannot recall the exact case at the moment.
> Even without that, it would increase contention for SLRU, which we
> probably don't want.

I was trying to highlight the problems, not advocate that as an
approach, sorry if I wasn't clear enough. Even if you solved the
problems both of you have mentioned I don't think the dynamic behaviour
will be useful enough to merit the effort of trying. I'm definitely not
going to be spending any time on this. Fish are frying elsewhere.

--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


Re: Heavy write activity on first vacuum of fresh TOAST data

From
"Kevin Grittner"
Date:
>>> On Thu, Dec 13, 2007 at 12:12 PM, in message
<1197569564.4255.1855.camel@ebony.site>, Simon Riggs <simon@2ndquadrant.com>
wrote:
> On Thu, 2007-12-13 at 11:46 -0600, Kevin Grittner wrote:
>> If the hint bit changes are written to the WAL ...
>
> They're not.

So one would expect a write-intensive initial vacuum after a
PITR-style recovery?

What impact would lack of the hint bits have until a vacuum?

-Kevin




Re: Heavy write activity on first vacuum of fresh TOAST data

From
Simon Riggs
Date:
On Thu, 2007-12-13 at 15:19 -0600, Kevin Grittner wrote:
> >>> On Thu, Dec 13, 2007 at 12:12 PM, in message
> <1197569564.4255.1855.camel@ebony.site>, Simon Riggs <simon@2ndquadrant.com>
> wrote:
> > On Thu, 2007-12-13 at 11:46 -0600, Kevin Grittner wrote:
> >> If the hint bit changes are written to the WAL ...
> >
> > They're not.
>
> So one would expect a write-intensive initial vacuum after a
> PITR-style recovery?

Very perceptive. I was just thinking about that myself. An interesting
issue when running with full_page_writes off.

> What impact would lack of the hint bits have until a vacuum?

Vacuum isn't important here. It's the first idiot to read the data that
gets hit.

--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com
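
A hypothetical way to watch this happen (names illustrative; run vmstat
alongside, as in the original post):

CREATE TABLE t AS SELECT g AS n FROM generate_series(1, 10000000) g;
CHECKPOINT;               -- the freshly loaded pages are now clean on disk
SELECT count(*) FROM t;   -- a "read-only" scan, yet it writes heavily:
                          -- every visibility check sets hint bits and
                          -- dirties the page it touches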


Re: Heavy write activity on first vacuum of fresh TOAST data

From
"Kevin Grittner"
Date:
>>> On Thu, Dec 13, 2007 at  3:40 PM, in message
<1197582010.4255.1912.camel@ebony.site>, Simon Riggs <simon@2ndquadrant.com>
wrote:
> On Thu, 2007-12-13 at 15:19 -0600, Kevin Grittner wrote:
>> >>> On Thu, Dec 13, 2007 at 12:12 PM, in message
>> <1197569564.4255.1855.camel@ebony.site>, Simon Riggs <simon@2ndquadrant.com>
>> wrote:
>> > On Thu, 2007-12-13 at 11:46 -0600, Kevin Grittner wrote:
>> >> If the hint bit changes are written to the WAL ...
>> >
>> > They're not.
>>
>> So one would expect a write-intensive initial vacuum after a
>> PITR-style recovery?
>
> Very perceptive. I was just thinking about that myself. An interesting
> issue when running with full_page_writes off.
>
>> What impact would lack of the hint bits have until a vacuum?
>
> Vacuum isn't important here. It's the first idiot to read the data that
> gets hit.

OK, I want to understand this well enough to recognize it when I
see it.  (As always, I appreciate the helpful answers here.)

Assuming no data is toasted, after a bulk INSERT or COPY into the
database, a subsequent SELECT COUNT(*) would figure out the correct
hint bits and rewrite all rows during execution of the SELECT
statement?

The same is true following a PITR-style recovery?

Toasted data would not be rewritten unless accessed (whether that
be for selection criteria, sort order, results, or whatever)?

A database VACUUM is going to run into every page not previously
accessed and make all hint bits correct?

Would a VACUUM FREEZE of a bulk-loaded table do one write for both
the hint bits and the transaction ID?  (I know that hackers
generally prefer that people leave the transaction IDs unfrozen
for a long time to aid in debugging problems, but that seems less
useful in a large table which has just been bulk-loaded, true?)

-Kevin



Re: Heavy write activity on first vacuum of fresh TOAST data

From
Russell Smith
Date:
Simon Riggs wrote:
> On Thu, 2007-12-13 at 15:19 -0600, Kevin Grittner wrote:
>
>> What impact would lack of the hint bits have until a vacuum?
>>
>
> Vacuum isn't important here. It's the first idiot to read the data that
> gets hit.
>
>
Given that vacuum must then touch every page, is there a win in only setting
hint bits on pages where vacuum has to do some other work on the page?
Vacuum is causing significant IO load for data that may not be
accessed for some time.

The question becomes: what is the impact of not setting hint bits?  Is it
better or worse than the IO caused by vacuum?

Regards

Russell Smith

Re: Heavy write activity on first vacuum of fresh TOAST data

From
Tom Lane
Date:
Russell Smith <mr-russ@pws.com.au> writes:
> Given that vacuum must then touch every page, is there a win in only setting
> hint bits on pages where vacuum has to do some other work on the page?
> Vacuum is causing significant IO load for data that may not be
> accessed for some time.

Well, if vacuum doesn't do it then some other poor sod will have to.

My feeling is that vacuum's purpose in life is to offload maintenance
cycles from foreground queries, so we should be happy to have it setting
all the hint bits.  If Kevin doesn't like the resultant I/O load then he
should use the vacuum_cost_delay parameters to dial down vacuum speed.

            regards, tom lane
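
The knobs in question, with illustrative values (the 8.2 defaults are
vacuum_cost_delay = 0, i.e. no throttling, and vacuum_cost_limit = 200):

SET vacuum_cost_delay = 20;       -- sleep 20 ms whenever the cost limit is reached
SET vacuum_cost_page_dirty = 20;  -- cost charged for each page vacuum dirties
SET vacuum_cost_limit = 200;      -- accumulated cost that triggers a sleep
VACUUM VERBOSE "DocImage";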

Re: Heavy write activity on first vacuum of fresh TOAST data

From
"Kevin Grittner"
Date:
>>> On Fri, Dec 14, 2007 at  1:42 AM, in message <29185.1197618162@sss.pgh.pa.us>,
Tom Lane <tgl@sss.pgh.pa.us> wrote:

> My feeling is that vacuum's purpose in life is to offload maintenance
> cycles from foreground queries, so we should be happy to have it setting
> all the hint bits.

Absolutely.

> If Kevin doesn't like the resultant I/O load then he
> should use the vacuum_cost_delay parameters to dial down vacuum speed.

It's not that I don't like it -- I'm often called upon to diagnose
issues, and understanding the dynamics of things like this helps me
interpret what I'm seeing.  No complaint here.

-Kevin




Re: Heavy write activity on first vacuum of fresh TOAST data

From
Matthew
Date:
On Thu, 13 Dec 2007, Simon Riggs wrote:
> On Thu, 2007-12-13 at 09:46 -0600, Kevin Grittner wrote:
> > Even though there have been no rollbacks, updates, or deletes on this
> > table, the vacuum is writing as much as it is reading while dealing
> > with the TOAST data.
>
> Writing hint bits. Annoying isn't it? :-(

Interesting thread. Now, I know absolutely nothing about how the data is
stored, but it strikes me as being non-optimal that every single block on
the disc needs to be written again just to update some hint bits. Could
those bits be taken out into a separate bitmap stored somewhere else? That
would mean the (relatively small) amount of data being written could be
written in a small sequential write to the disc, rather than very sparsely
over the whole table.

Matthew

--
If you let your happiness depend upon how somebody else feels about you,
now you have to control how somebody else feels about you. -- Abraham Hicks
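
Matthew's idea foreshadows the visibility map that PostgreSQL 8.4
eventually introduced: a separate bitmap fork holding per-page visibility
bits, though it tracks all-visible pages for vacuum's benefit rather than
moving the per-tuple hint bits out of the heap.  On 9.6 and later it can
be examined with the contrib pg_visibility module; a hypothetical session:

CREATE EXTENSION pg_visibility;
SELECT * FROM pg_visibility_map('some_table') LIMIT 3;
-- blkno | all_visible | all_frozen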

Re: Heavy write activity on first vacuum of fresh TOAST data

From
"Kevin Grittner"
Date:
>>> On Thu, Dec 13, 2007 at  3:40 PM, in message
<1197582010.4255.1912.camel@ebony.site>, Simon Riggs <simon@2ndquadrant.com>
wrote:
> On Thu, 2007-12-13 at 15:19 -0600, Kevin Grittner wrote:

>> So one would expect a write-intensive initial vacuum after a
>> PITR-style recovery?

> An interesting issue when running with full_page_writes off.

I'm curious.  How does the full_page_writes setting affect this?

-Kevin




Re: Heavy write activity on first vacuum of fresh TOAST data

From
Tom Lane
Date:
Matthew <matthew@flymine.org> writes:
> Interesting thread. Now, I know absolutely nothing about how the data is
> stored, but it strikes me as being non-optimal that every single block on
> the disc needs to be written again just to update some hint bits. Could
> those bits be taken out into a separate bitmap stored somewhere else?

You are trying to optimize the wrong thing.  Making vacuum cheaper at
the cost of making every tuple lookup more expensive is not going to
be a win.

            regards, tom lane

Re: Heavy write activity on first vacuum of fresh TOAST data

From
Matthew
Date:
On Fri, 14 Dec 2007, Tom Lane wrote:
> Matthew <matthew@flymine.org> writes:
> > Interesting thread. Now, I know absolutely nothing about how the data is
> > stored, but it strikes me as being non-optimal that every single block on
> > the disc needs to be written again just to update some hint bits. Could
> > those bits be taken out into a separate bitmap stored somewhere else?
>
> You are trying to optimize the wrong thing.  Making vacuum cheaper at
> the cost of making every tuple lookup more expensive is not going to
> be a win.

True, although it depends how much data there actually is. If it's one bit
per page, then you could possibly rely on that data staying in the cache
enough for it to stop being a performance hit. If it's much more data than
that, then yes it's silly, and I should be embarrassed at having made the
suggestion. No point making each random tuple lookup into two disc
accesses instead of one.

Matthew

--
It is better to keep your mouth closed and let people think you are a fool
than to open it and remove all doubt.                  -- Mark Twain