Not to offend, but since most of us are PG users, we're not all that
familiar with what the different tests in MySQL's sql-bench benchmark
do. So you won't get very far by saying "PG is slow on benchmark X, can
I make it faster?", because that doesn't include any of the information
we need in order to help.
Specifics would be nice, including at least the following:
1. Which specific test case(s) would you like to try to make faster?
What do the table schemas look like, including indexes and constraints?
2. What strategy did you settle on for handling VACUUM and ANALYZE
during the test? Have you confirmed that you aren't suffering from
table bloat? (A quick way to check is sketched after this list.)
3. What are the actual results you got from the PG run in question?
4. What is the size of the data set referenced in the test run?
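Regarding point 2, something like the following is a reasonable
starting point between benchmark runs. This is only a sketch; "bench1"
stands in for whichever sql-bench table is actually under test:

-- Reclaim dead tuples and refresh planner statistics between runs.
VACUUM ANALYZE bench1;

-- Rough bloat check: compare on-disk pages to the live row count.
-- relpages/reltuples are only as accurate as the last VACUUM/ANALYZE.
SELECT relname, relpages, reltuples,
       relpages * 8 / 1024 AS approx_size_mb
FROM pg_class
WHERE relname = 'bench1';

If relpages keeps growing across runs while reltuples stays roughly
flat, the table is bloating and the later test numbers will degrade
accordingly.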
-- Mark Lewis
On Thu, 2006-09-21 at 07:52 -0700, yoav x wrote:
> Hi
>
> After upgrading DBI and DBD::Pg, this benchmark still picks MySQL as the winner (at least on Linux
> RH3 on a Dell 1875 server with 2 hyperthreaded 3.6GHz CPUs and 4GB RAM).
> I've applied the following parameters to postgresql.conf:
>
> max_connections = 500
> shared_buffers = 3000
> work_mem = 100000
> effective_cache_size = 3000000000
>
> Most queries still perform slower than with MySQL.
> Is there anything else that can be tweaked or is this a limitation of PG or the benchmark?
>
> Thanks.