jao@geophile.com writes:
> I have this table and index:
> create table t(id int, hash int);
> create index idx_t on t(hash);
> The value of the hash column, which is indexed, is a pseudo-random
> number. I load the table and measure the time per insert.
> What I've observed is that inserts slow down as the table grows to
> 1,000,000 records. Observing the pg_stat* tables, I see that the data
> page reads per unit time stay steady, but that index page reads grow
> quickly (shared_buffers was set to 2000).
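For reference, here is a self-contained way to reproduce that sort of
measurement (a sketch, not necessarily the original harness; it assumes
psql's \timing, generate_series(), an arbitrary batch size of 100,000
rows, and that block-level stats are enabled, i.e. stats_block_level on
that era of releases):

    \timing
    -- one batch of inserts with pseudo-random hash values
    INSERT INTO t
    SELECT g, (random() * 2000000000)::int
    FROM generate_series(1, 100000) AS g;

    -- heap vs. index block reads accumulated so far
    SELECT heap_blks_read, heap_blks_hit
    FROM pg_statio_user_tables WHERE relname = 't';
    SELECT idx_blks_read, idx_blks_hit
    FROM pg_statio_user_indexes WHERE indexrelname = 'idx_t';

Repeating the batch and sampling the counters after each one shows how
per-batch insert time and index page reads scale with table size.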
Define "quickly" ... the expected behavior is that cost to insert into
a btree index grows roughly as log(N). Are you seeing anything worse
than that?
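One concrete way to check is to watch the index's physical size as the
table grows; pg_class.relpages is refreshed by VACUUM/ANALYZE (a sketch,
using the table and index names quoted above):

    ANALYZE t;
    SELECT relpages, reltuples
    FROM pg_class WHERE relname = 'idx_t';

If insert cost is only tracking log(N), the pages touched per descent grow
with tree depth, which rises very slowly as relpages increases.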
shared_buffers of 2000 (only about 16MB) is generally considered too small
for a high-volume database. Numbers like 10000-50000 are reasonable on
modern hardware. It's possible that you could go larger without too much
penalty with the 8.1 buffer manager code, but I don't know whether anyone
has benchmarked that systematically.
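Concretely, raising it is a one-line postgresql.conf edit plus a server
restart (in 8.1 and earlier the value is a count of 8kB buffers; the
number below is illustrative, not a recommendation for your hardware):

    shared_buffers = 20000      # ~160MB at the default 8kB page size

You can confirm the running value afterwards with SHOW shared_buffers;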
regards, tom lane