Thread: Vacuum and Large Objects

Vacuum and Large Objects

From
"Simon Windsor"
Date:

Hi

 

I am having problems recovering storage from a Postgres 9.0.5 database that is used to hold large XML blocks for a week before they are archived offline.

 

The main tables are partitioned into daily partitions, and these are easy to manage; however, the DB keeps growing despite a daily VACUUM (at 0700) and autovacuum (which does not seem to run, although the process is running). The system is insert-only, and partitions are dropped when they are over 7 days old.

 

I believe the issue lies with pg_largeobject, which is split across 88 files of approx. 1GB each.
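Those 88 files are just the 1GB on-disk segments of the pg_largeobject heap; for reference, a size check along these lines (standard size functions, nothing specific to my setup) should show the same figure from SQL:

  SELECT pg_size_pretty(pg_relation_size('pg_largeobject'))       AS heap_size,
         pg_size_pretty(pg_total_relation_size('pg_largeobject')) AS total_size;  -- total includes the index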

 

The Postgres settings are default, EXCEPT

 

grep ^[a-z] postgresql.conf

listen_addresses = '*'          # what IP address(es) to listen on;

port = 5432                             # (change requires restart)

max_connections = 1000                  # (change requires restart)

shared_buffers = 256MB                  # min 128kB

work_mem = 4MB                          # min 64kB

maintenance_work_mem = 256MB            # min 1MB

vacuum_cost_delay = 20ms                # 0-100 milliseconds

checkpoint_segments = 32                # in logfile segments, min 1, 16MB each

checkpoint_completion_target = 0.9      # checkpoint target duration, 0.0 - 1.0

checkpoint_warning = 60s                # 0 disables

archive_mode = off              # allows archiving to be done

constraint_exclusion = partition        # on, off, or partition

log_destination = 'stderr'              # Valid values are combinations of

logging_collector = on          # Enable capturing of stderr and csvlog

silent_mode = on                        # Run server silently.

log_checkpoints = on

log_line_prefix = '%t %d %u '                   # special values:

log_statement = 'none'                  # none, ddl, mod, all

track_activities = on

track_counts = on

autovacuum = on                 # Enable autovacuum subprocess?  'on'

log_autovacuum_min_duration = 250       # -1 disables, 0 logs all actions and

autovacuum_max_workers = 3              # max number of autovacuum subprocesses

autovacuum_naptime = 3min               # time between autovacuum runs

autovacuum_vacuum_threshold = 500       # min number of row updates before

autovacuum_analyze_threshold = 100      # min number of row updates before

autovacuum_vacuum_scale_factor = 0.1    # fraction of table size before vacuum

autovacuum_analyze_scale_factor = 0.05  # fraction of table size before analyze

autovacuum_vacuum_cost_delay = 5ms      # default vacuum cost delay for

autovacuum_vacuum_cost_limit = 200      # default vacuum cost limit for

statement_timeout = 0                   # in milliseconds, 0 is disabled

datestyle = 'iso, dmy'

lc_messages = 'en_GB.UTF-8'                     # locale for system error message

lc_monetary = 'en_GB.UTF-8'                     # locale for monetary formatting

lc_numeric = 'en_GB.UTF-8'                      # locale for number formatting

lc_time = 'en_GB.UTF-8'                         # locale for time formatting

default_text_search_config = 'pg_catalog.english'

 

Besides running VACUUM FULL pg_largeobject, is there a way I can get autovacuum to start and clear this up?
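(A quick way to see whether autovacuum has ever processed the table, using the standard statistics view, would be something like the following; the n_dead_tup figure should also show whether a plain VACUUM has anything to reclaim.)

  SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
  FROM pg_stat_all_tables
  WHERE relname = 'pg_largeobject';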

 

All the best

 

Simon

 

Simon Windsor

Eml: simon.windsor@cornfield.org.uk

Tel: 01454 617689

Mob: 07590 324560

 

“There is nothing in the world that some man cannot make a little worse and sell a little cheaper, and he who considers price only is that man's lawful prey.”

 

Re: Vacuum and Large Objects

From
Craig Ringer
Date:
On 12/15/2011 04:01 AM, Simon Windsor wrote:

Hi

 

I am having problems recovering storage from a Postgres 9.0.5 database that is used to hold large XML blocks for a week before they are archived offline.

The main tables are partitioned into daily partitions, and these are easy to manage; however, the DB keeps growing despite a daily VACUUM (at 0700) and autovacuum (which does not seem to run, although the process is running). The system is insert-only, and partitions are dropped when they are over 7 days old.

I believe the issue lies with pg_largeobject, which is split across 88 files of approx. 1GB each.


Are you using pg_largeobject via the lo_ functions, or via the `lo' datatype?

If you're using it via the `lo' type then certain actions can IIRC cause large object leaks. Try vacuumlo.

http://www.postgresql.org/docs/current/static/vacuumlo.html

vacuumlo is **NOT** suitable for use on databases where you use the lo_ functions directly.
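If vacuumlo does apply to your setup (large objects referenced only through oid/lo columns), a dry run first is worth doing. A rough sketch; the user and database names here are placeholders:

  vacuumlo -n -v -U postgres yourdb    # dry run: only report the orphaned large objects
  vacuumlo -v -U postgres yourdb       # actually lo_unlink() the orphans

A plain VACUUM of pg_largeobject afterwards lets the freed space be reused, though it won't shrink the files on disk.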

See also the `lo' module:

http://www.postgresql.org/docs/current/static/lo.html
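With the lo type plus the lo_manage trigger from that module, deleting or updating a row cleans up the referenced large object automatically. A minimal sketch (the table and column names are made up, and on 9.0 the module is installed from contrib's lo.sql rather than CREATE EXTENSION):

  CREATE TABLE xml_blob (id serial PRIMARY KEY, doc lo);

  CREATE TRIGGER t_doc
      BEFORE UPDATE OR DELETE ON xml_blob
      FOR EACH ROW EXECUTE PROCEDURE lo_manage(doc);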


If you're using the lo_ functions directly and are still seeing excessive space consumption in pg_largeobject, then beyond a CLUSTER or VACUUM FULL run I'm not sure what to advise.

--
Craig Ringer

Re: Vacuum and Large Objects

From
"Igor Neyman"
Date:
From: Simon Windsor [mailto:simon.windsor@cornfield.me.uk]
Sent: Wednesday, December 14, 2011 3:02 PM
To: pgsql-general@postgresql.org
Subject: Vacuum and Large Objects

Hi

I am having problems recovering storage from a Postgres 9.0.5 database that is used to hold large XML blocks for a week before they are archived offline.

The main tables are partitioned into daily partitions, and these are easy to manage; however, the DB keeps growing despite a daily VACUUM (at 0700) and autovacuum (which does not seem to run, although the process is running). The system is insert-only, and partitions are dropped when they are over 7 days old.

I believe the issue lies with pg_largeobject, which is split across 88 files of approx. 1GB each.

The Postgres settings are default, EXCEPT

grep ^[a-z] postgresql.conf
listen_addresses = '*'          # what IP address(es) to listen on;
port = 5432                             # (change requires restart)
max_connections = 1000                  # (change requires restart)
shared_buffers = 256MB                  # min 128kB
work_mem = 4MB                          # min 64kB
maintenance_work_mem = 256MB            # min 1MB
vacuum_cost_delay = 20ms                # 0-100 milliseconds
checkpoint_segments = 32                # in logfile segments, min 1, 16MB each
checkpoint_completion_target = 0.9      # checkpoint target duration, 0.0 - 1.0
checkpoint_warning = 60s                # 0 disables
archive_mode = off              # allows archiving to be done
constraint_exclusion = partition        # on, off, or partition
log_destination = 'stderr'              # Valid values are combinations of
logging_collector = on          # Enable capturing of stderr and csvlog
silent_mode = on                        # Run server silently.
log_checkpoints = on
log_line_prefix = '%t %d %u '                   # special values:
log_statement = 'none'                  # none, ddl, mod, all
track_activities = on
track_counts = on
autovacuum = on                 # Enable autovacuum subprocess?  'on'
log_autovacuum_min_duration = 250       # -1 disables, 0 logs all actions and
autovacuum_max_workers = 3              # max number of autovacuum subprocesses
autovacuum_naptime = 3min               # time between autovacuum runs
autovacuum_vacuum_threshold = 500       # min number of row updates before
autovacuum_analyze_threshold = 100      # min number of row updates before
autovacuum_vacuum_scale_factor = 0.1    # fraction of table size before vacuum
autovacuum_analyze_scale_factor = 0.05  # fraction of table size before analyze
autovacuum_vacuum_cost_delay = 5ms      # default vacuum cost delay for
autovacuum_vacuum_cost_limit = 200      # default vacuum cost limit for
statement_timeout = 0                   # in milliseconds, 0 is disabled
datestyle = 'iso, dmy'
lc_messages = 'en_GB.UTF-8'                     # locale for system error message
lc_monetary = 'en_GB.UTF-8'                     # locale for monetary formatting
lc_numeric = 'en_GB.UTF-8'                      # locale for number formatting
lc_time = 'en_GB.UTF-8'                         # locale for time formatting
default_text_search_config = 'pg_catalog.english'

Besides running VACUUM FULL pg_largeobject, is there a way I can get autovacuum to start and clear this up?

All the best

Simon

Simon Windsor
Eml: simon.windsor@cornfield.org.uk
Tel: 01454 617689
Mob: 07590 324560

"There is nothing in the world that some man cannot make a little worse and sell a little cheaper, and he who considers
priceonly is that man's lawful prey." 

>
>

I might be a bit late in this discussion.
But I think your problem is right here:

"running VACUUM FULL pg_largeobject"

If you are running "VACUUM FULL ..." on the table, you should follow it with the "REINDEX TABLE ...", at least on PG versions prior to 9.0.
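In other words, something along these lines on releases before 9.0 (note that VACUUM FULL locks pg_largeobject exclusively while it runs):

  VACUUM FULL VERBOSE pg_largeobject;
  REINDEX TABLE pg_largeobject;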

Regards,
Igor Neyman

Re: Vacuum and Large Objects

From
Stefan Keller
Date:
Hi Igor
2011/12/16 Igor Neyman <ineyman@perceptron.com> wrote:
> But I think your problem is right here:
>
> "running VACUUM FULL pg_largeobject"
>
> If you are running "VACUUM FULL ..." on the table, you should follow it with the "REINDEX TABLE ...", at least on PG versions prior to 9.0.

I'm pretty sure that VACUUM FULL builds new indexes. That's at least
how I understand the docs, especially the first "tip" here:
http://www.postgresql.org/docs/9.0/static/routine-vacuuming.html

Yours, Stefan



Re: Vacuum and Large Objects

From
Guillaume Lelarge
Date:
On Fri, 2012-01-06 at 07:12 +0100, Stefan Keller wrote:
> Hi Igor
> 2011/12/16 Igor Neyman <ineyman@perceptron.com> wrote:
> > But I think your problem is right here:
> >
> > "running VACUUM FULL pg_largeobject"
> >
> > If you are running "VACUUM FULL ..." on the table, you should follow it with the "REINDEX TABLE ...", at least on PG versions prior to 9.0.
>
> I'm pretty sure that VACUUM FULL builds new indexes. That's at least
> how I understand the docs, especially the first "tip" here:
> http://www.postgresql.org/docs/9.0/static/routine-vacuuming.html
>

Before 9.0, VACUUM FULL required a REINDEX afterwards if you wanted to
keep decent performance.

With 9.0, that is no longer required, because the new VACUUM FULL doesn't
bloat the indexes any more.

So, in a sense, you were both right :) The documentation you're
referring to is the 9.0 manual, and Igor specified that one needs to
REINDEX after VACUUM FULL for any release prior to 9.0.


--
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
PostgreSQL Sessions #3: http://www.postgresql-sessions.org


Re: Vacuum and Large Objects

From
"Igor Neyman"
Date:
> -----Original Message-----
> From: Stefan Keller [mailto:sfkeller@gmail.com]
> Sent: Friday, January 06, 2012 1:12 AM
> To: Igor Neyman
> Cc: Simon Windsor; pgsql-general@postgresql.org
> Subject: Re: Vacuum and Large Objects
> 
> Hi Igor
> 2011/12/16 Igor Neyman <ineyman@perceptron.com> wrote:
> > But I think your problem is right here:
> >
> > "running VACUUM FULL pg_largeobject"
> >
> > If you are running "VACUUM FULL ..." on the table, you should follow it with the "REINDEX TABLE ...", at least on PG versions prior to 9.0.
>
> I'm pretty sure that VACUUM FULL builds new indexes. That's at least
> how I understand the docs, especially the first "tip" here:
> http://www.postgresql.org/docs/9.0/static/routine-vacuuming.html
> 
> Yours, Stefan
> 
> 

Please disregard my previous message.
I missed "Postgres 9.0.5 database" in the original post.

Regards,
Igor Neyman

Re: Vacuum and Large Objects

From
"Igor Neyman"
Date:
Stefan,

> -----Original Message-----
> From: Stefan Keller [mailto:sfkeller@gmail.com]
> Sent: Friday, January 06, 2012 1:12 AM
> To: Igor Neyman
> Cc: Simon Windsor; pgsql-general@postgresql.org
> Subject: Re: Vacuum and Large Objects
> 
> Hi Igor
> 2011/12/16 Igor Neyman <ineyman@perceptron.com> wrote:
> > But I think your problem is right here:
> >
> > "running VACUUM FULL pg_largeobject"
> >
> > If you are running "VACUUM FULL ..." on the table, you should follow it with the "REINDEX TABLE ...", at least on PG versions prior to 9.0.
>
> I'm pretty sure that VACUUM FULL builds new indexes. That's at least
> how I understand the docs, especially the first "tip" here:
> http://www.postgresql.org/docs/9.0/static/routine-vacuuming.html
> 
> Yours, Stefan
> 
> 

I qualified my reply with PG version specifics, since the original poster didn't specify the Postgres version he is using.

Regards,
Igor Neyman

Re: Vacuum and Large Objects

From
Radosław Smogura
Date:
On Fri, 06 Jan 2012 08:51:24 +0100, Guillaume Lelarge wrote:
> On Fri, 2012-01-06 at 07:12 +0100, Stefan Keller wrote:
>> Hi Igor
>> 2011/12/16 Igor Neyman <ineyman@perceptron.com> wrote:
>> > But I think your problem is right here:
>> >
>> > "running VACUUM FULL pg_largeobject"
>> >
>> > If you are running "VACUUM FULL ..." on the table, you should follow it with the "REINDEX TABLE ...", at least on PG versions prior to 9.0.
>>
>> I'm pretty sure that VACUUM FULL builds new indexes. That's at least
>> how I understand the docs, especially the first "tip" here:
>> http://www.postgresql.org/docs/9.0/static/routine-vacuuming.html
>>
>
> Before 9.0, VACUUM FULL required a REINDEX afterwards if you wanted to
> keep decent performance.
>
> With 9.0, that is no longer required, because the new VACUUM FULL
> doesn't bloat the indexes any more.
>
> So, in a sense, you were both right :) The documentation you're
> referring to is the 9.0 manual, and Igor specified that one needs to
> REINDEX after VACUUM FULL for any release prior to 9.0.
>
>
> --
> Guillaume
> http://blog.guillaume.lelarge.info
> http://www.dalibo.com
> PostgreSQL Sessions #3: http://www.postgresql-sessions.org

Not quite true. I have a development server ("PostgreSQL 9.0.5, compiled by
Visual C++ build 1500, 64-bit"); after clearing large objects and vacuuming,
accessing the DB stats from pgAdmin gives permission denied on relation xxxx,
and in fact this file gets the wrong permissions. REINDEX resolves the problem.

Regards,
Radek