On Fri, Mar 18, 2011 at 10:07 PM, Phoenix Kiula <phoenix.kiula@gmail.com> wrote:
> I have a large table but not as large as the kind of numbers that get
> discussed on this list. It has 125 million rows.
>
> REINDEXing the table has been running for half a day now, and it's still not finished.
>
> To write this post I did a "SELECT COUNT(*)", and here's the output -- it took so long!
>
> select count(*) from links;
> count
> -----------
> 125418191
> (1 row)
>
> Time: 1270405.373 ms
>
> That's 1270 seconds!
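count(*) in postgres always scans the whole table, so on 125 million
rows it is never going to be quick. If a ballpark figure is good
enough, the planner's own estimate is essentially free -- it's only as
fresh as your last vacuum/analyze, but something like this (untested)
avoids the scan entirely:

-- approximate row count from the planner's statistics
select reltuples::bigint as approx_rows
from pg_class
where relname = 'links';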
>
> I suppose vacuum analyze is not doing its job? As you can see
> from the settings below, I have autovacuum set to ON, and there's also
> a cronjob every 10 hours doing a manual vacuum analyze on this table,
> which is the largest.
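Before assuming vacuum isn't doing its job, check whether it has
actually been touching the table -- 8.2 records this in the stats
views. Something like (untested):

-- last manual and automatic vacuum/analyze on the table
select relname, last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze
from pg_stat_user_tables
where relname = 'links';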
>
> PG is version 8.2.9.
>
> Any thoughts on what I can do to improve performance!?
>
> Below are my settings.
>
> max_connections = 300
> shared_buffers = 500MB
> effective_cache_size = 1GB
> max_fsm_relations = 1500
> max_fsm_pages = 950000
>
> work_mem = 100MB
> temp_buffers = 4096
> authentication_timeout = 10s
> ssl = off
> checkpoint_warning = 3600
> random_page_cost = 1
>
> autovacuum = on
> autovacuum_vacuum_cost_delay = 20
>
> vacuum_cost_delay = 20
> vacuum_cost_limit = 600
>
> autovacuum_naptime = 10
> stats_start_collector = on
> stats_row_level = on
> autovacuum_vacuum_threshold = 75
> autovacuum_analyze_threshold = 25
> autovacuum_analyze_scale_factor = 0.02
> autovacuum_vacuum_scale_factor = 0.01
>
> wal_buffers = 64
> checkpoint_segments = 128
> checkpoint_timeout = 900
> fsync = on
> maintenance_work_mem = 512MB
How much memory do you have? You might want to consider raising
maintenance_work_mem to 1GB for the rebuild -- you can do that for just
the session running the REINDEX, something like (untested):
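-- per-session bump only; 8.2 accepts unit suffixes in SET
set maintenance_work_mem = '1GB';
reindex table links;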
Are other things going on in the database while you are rebuilding
your indexes? Is it possible you are blocked waiting on a lock for a
while? Something like this should show anyone stuck waiting (untested):
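-- sessions waiting on ungranted locks (8.2: pg_stat_activity uses procpid)
select a.procpid, l.relation::regclass, l.mode, a.current_query
from pg_locks l
join pg_stat_activity a on a.procpid = l.pid
where not l.granted;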
How much index data is there? Can we see the table definition along
with the CREATE INDEX statements? For the sizes, something like this
gives a quick per-index breakdown (untested):
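-- size of each index on the table
select indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) as index_size
from pg_stat_user_indexes
where relname = 'links';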
merlin