On Tuesday, 15 March 2005 17:08, Scott Marlowe wrote:
> On Tue, 2005-03-15 at 02:59, Marcin Giedz wrote:
> > On Monday, 14 March 2005 19:32, Scott Marlowe wrote:
> > > On Mon, 2005-03-14 at 12:03, Marcin Giedz wrote:
> > > > Hello...
> > > >
> > > >
> > > > Our company is going to change SQL engines from MySQL to
> > > > PostgreSQL. Naturally, some performance problems have occurred.
> > > > Our server is a dual Xeon 3.0GHz + 8GB RAM + RAID1 (software -
> > > > two 146GB SCSI 15k) for SQL data + RAID1 (software - two 146GB
> > > > SCSI 15k) for pg_xlog. The postgresql.conf parameters are as
> > > > follows:
> > > >
> > > > max_connections = 150
> > > > shared_buffers = 50000   # min 16, at least max_connections*2, 8KB each
> > > > work_mem = 2048          # min 64, size in KB
> > >
> > > 50,000 shared buffers may or may not be too much. Try it at different
> > > sizes from 5,000 or so up to 50,000 and find the "knee". It's usually
> > > closer to 10,000 than 50,000, but ymmv...
> >
> > Playing with shared_buffers anywhere from 10000 to 50000 doesn't
> > change the total time for this query at all :( But when I raise
> > work_mem a bit, to 10000, total runtime decreases by about 10%, and
> > when I change random_page_cost to 0.2 (I know that almost all papers
> > say it should be higher than 1.0) total runtime drops almost 3x, to
> > about 900ms - earlier, with random_page_cost=1.2, it took 2.7s. Is
> > it reasonable to run with random_page_cost at this value?
>
> If random_page_cost needs to be that low, then it's likely that the
> query planner is either getting bad statistics and making a poor
> decision, or that you've got a corner case it just can't figure out.
> What does EXPLAIN ANALYZE <yourqueryhere> say with random_page_cost
> set to 1.2 versus 0.2? Have you run ANALYZE and VACUUM FULL lately?
It can't be the statistics - I've run VACUUM FULL ANALYZE, and it didn't
change anything ;)
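
For the record, this is roughly how I compared the two settings - since
random_page_cost can be SET per session, no restart is needed (the actual
query is omitted here):

    SET random_page_cost = 1.2;
    EXPLAIN ANALYZE SELECT ...;   -- total runtime ~2.7s

    SET random_page_cost = 0.2;
    EXPLAIN ANALYZE SELECT ...;   -- total runtime ~900ms
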
Marcin
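
PS. The shared_buffers sweep went roughly like this (each change needs a
postmaster restart; the values are just the sample points I tried):

    # postgresql.conf - one value per run, restart the postmaster after each
    shared_buffers = 10000    # also tried 20000, 30000, 50000

    -- then, in psql, time the same representative query:
    EXPLAIN ANALYZE SELECT ...;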