Thread: database contest results
Hi, I have not studied this contest in any detail. However the performance differences seem kind of unrealistic: http://www.mysql.com/news-and-events/press-release/release_2006_35.html regards, Lukas
Lukas Kahwe Smith wrote: > Hi, > > I have not studied this contest in any detail. However the performance > differences seem kind of unrealistic: > http://www.mysql.com/news-and-events/press-release/release_2006_35.html Oops, sorry for reposting this. I must have overlooked it when I checked the archives yesterday. regards, Lukas
Lukas Kahwe Smith wrote: > Hi, > > I have not studied this contest in any detail. However the performance > differences seem kind of unrealistic: > http://www.mysql.com/news-and-events/press-release/release_2006_35.html > > regards, > Lukas > > > ---------------------------(end of broadcast)--------------------------- > TIP 4: Have you searched our list archives? > > http://archives.postgresql.org this is pure marketing. i have seen postgres beat mysql in many many cases. same with oracle. i assume that those tests are all done with ISAM. with ISAM everything is fast but you cannot reboot the box without facing serious corruption. in business applications stability is at least as important as speed. as far as the mysql benchmark is concerned: it is like one of those studies where professor marlboro says that smoking is good for health ... best regards, hans -- Cybertec Geschwinde & Schönig GmbH Schöngrabern 134; A-2020 Hollabrunn Tel: +43/1/205 10 35 / 340 www.postgresql.at, www.cybertec.at
Hans-Juergen Schoenig wrote: > Lukas Kahwe Smith wrote: >> Hi, >> >> I have not studied this contest in any detail. However the >> performance differences seem kind of unrealistic: >> http://www.mysql.com/news-and-events/press-release/release_2006_35.html >> >> regards, >> Lukas > > this is pure marketing. > i have seen postgres beat mysql in many many cases. same with oracle. > i assume that those tests are all done with ISAM. with ISAM everything > is fast but you cannot reboot the box without facing serious > corruption. in business applications stability is at least as > important as speed. > > as far as the mysql benchmark is concerned: it is like one of those > studies where professor marlboro says that smoking is good for health ... This was done by c't, not mysql. But the article title is misleading: it doesn't compare database systems, but application tuning. Unfortunately, the prerequisites were quite mysql-drawn right from the start. The MySQL team did a very good job tuning the existing application to access the database as rare as possible using memcache, and dropped all constraints (no surprise...). Porting _that_ optimized app to pgsql would be really interesting and comparable. Regards, Andreas
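The caching technique Andreas describes, hitting the database as rarely as possible by fronting it with memcache, is the classic read-through (cache-aside) pattern. A minimal sketch follows; it is purely illustrative, not code from the contest entry: a dict stands in for memcached, a counter stands in for real SQL queries, and the names (`query_database`, `get`, `user:42`) are all made up for the example.

```python
# Read-through cache sketch: consult the cache first, fall back to the
# "database" only on a miss, and populate the cache with the result.
# A dict stands in for memcached; db_queries counts actual "SQL" hits.

db_queries = 0  # how often the expensive backend is really queried

def query_database(key):
    """Stand-in for an expensive SQL query against the real database."""
    global db_queries
    db_queries += 1
    return "row-for-%s" % key

cache = {}

def get(key):
    """Read-through lookup: serve from cache, fill it on a miss."""
    if key not in cache:
        cache[key] = query_database(key)
    return cache[key]

# Two reads of the same key cost only one database query.
first = get("user:42")
second = get("user:42")
assert first == second
assert db_queries == 1
```

The trade-off is exactly the one debated in this thread: repeated reads become near-free, but the database is bypassed, so consistency and durability now depend on the application layer rather than the database.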
> this is pure marketing. > i have seen postgres beat mysql in many many cases. same with oracle. I don't care if they beat us. What is ridiculous is what they said they beat us by. Joshua D. Drake > i assume that those tests are all done with ISAM. with ISAM everything > is fast but you cannot reboot the box without facing serious corruption. > in business applications stability is as least as important as speed. > > as far as the mysql benchmark is concerned: it is like one of those > studies where professor marlboro says that smoking is good for health ... > > best regards, > > hans > > -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/
Andreas Pflug wrote: > Hans-Juergen Schoenig wrote: > >> Lukas Kahwe Smith wrote: >> >>> Hi, >>> >>> I have not studied this contest in any detail. However the >>> performance differences seem kind of unrealistic: >>> http://www.mysql.com/news-and-events/press-release/release_2006_35.html >>> >> this is pure marketing. >> i have seen postgres beat mysql in many many cases. same with oracle. >> i assume that those tests are all done with ISAM. with ISAM everything >> is fast but you cannot reboot the box without facing serious >> corruption. in business applications stability is at least as >> important as speed. >> > the prerequisites were quite mysql-drawn right from the start. The > MySQL team did a very good job tuning the existing application to access > the database as rare as possible using memcache, and dropped all > constraints (no surprise...). Porting _that_ optimized app to pgsql > would be really interesting and comparable. > Good Point :-)) Does it mean that a faster way of working with mysql is using mysql as little as possible? Then probably the fastest way is never using mysql? SCNR Anastasios
On Tue, 2006-08-29 at 12:32 +0200, Hans-Juergen Schoenig wrote: > this is pure marketing. > i have seen postgres beat mysql in many many cases. same with oracle. > i assume that those tests are all done with ISAM. with ISAM everything > is fast but you cannot reboot the box without facing serious corruption. > in business applications stability is as least as important as speed. > Using MyISAM alone cannot explain those differences. Supposedly it was 3000 opm versus 120 opm. Clearly, there is a huge difference in the overall application. The PostgreSQL entry was done quickly, and the author probably didn't understand the terms of the contest entirely, let alone have the time to optimize his entry. Regards, Jeff Davis
Jeff Davis wrote: > Clearly, there is a huge difference in the overall application. The > PostgreSQL entry was done quickly, and the author probably didn't > understand the terms of the contest entirely, let alone have the time > to optimize his entry. I don't think you should make these kinds of insulting judgements without research. -- Peter Eisentraut http://developer.postgresql.org/~petere/
On Tue, 2006-08-29 at 19:20 +0200, Peter Eisentraut wrote: > Jeff Davis wrote: > > Clearly, there is a huge difference in the overall application. The > > PostgreSQL entry was done quickly, and the author probably didn't > > understand the terms of the contest entirely, let alone have the time > > to optimize his entry. > > I don't think you should make these kinds of insulting judgements > without research. > The author himself said he didn't have time. I didn't mean to be insulting, and I apologize if I was. 120 versus 3000 seems like the MySQL entry guys were operating with an entirely separate set of assumptions, and spent much more time optimizing it and determining the exact contest requirements. Regards, Jeff Davis
Jeff Davis wrote: > On Tue, 2006-08-29 at 19:20 +0200, Peter Eisentraut wrote: >> Jeff Davis wrote: >>> Clearly, there is a huge difference in the overall application. The >>> PostgreSQL entry was done quickly, and the author probably didn't >>> understand the terms of the contest entirely, let alone have the time >>> to optimize his entry. >> I don't think you should make these kinds of insulting judgements >> without research. >> > > The author himself said he didn't have time. I didn't mean to be > insulting, and I apologize if I was. 120 versus 3000 seems like the > MySQL entry guys were operating with an entirely separate set of > assumptions, and spent much more time optimizing it and determining the > exact contest requirements. I didn't find it insulting. Sincerely, Joshua D. Drake > > Regards, > Jeff Davis
On Tue, 2006-08-29 at 10:55 -0700, Jeff Davis wrote: > On Tue, 2006-08-29 at 19:20 +0200, Peter Eisentraut wrote: > > Jeff Davis wrote: > > > Clearly, there is a huge difference in the overall application. The > > > PostgreSQL entry was done quickly, and the author probably didn't > > > understand the terms of the contest entirely, let alone have the time > > > to optimize his entry. > > > > I don't think you should make these kinds of insulting judgements > > without research. > > > > The author himself said he didn't have time. I didn't mean to be > insulting, and I apologize if I was. 120 versus 3000 seems like the > MySQL entry guys were operating with an entirely separate set of > assumptions, and spent much more time optimizing it and determining the > exact contest requirements. > After re-reading my original post, it came out differently than I intended, and I apologize to the author. My point was simply that the author did not spend as much time as the MySQL entry participants (I'm not criticizing the author, but just paraphrasing his words). I think that the time the MySQL entry participants spent preparing, which included the time they spent understanding the bounds of the contest, had much more to do with the end result than any technical details. We can't have a perfect entry for every contest, so I don't think it's a big deal. If someone sees a contest in the future, they should post it on -advocacy when they see it, so that PostgreSQL people have more time to prepare a potential entry. Then, we can have a few extra "wins" among the crowd that reads those articles. Regards, Jeff Davis
Jeff Davis wrote: > On Tue, 2006-08-29 at 19:20 +0200, Peter Eisentraut wrote: > >> Jeff Davis wrote: >> >>> Clearly, there is a huge difference in the overall application. The >>> PostgreSQL entry was done quickly, and the author probably didn't >>> understand the terms of the contest entirely, let alone have the time >>> to optimize his entry. >>> >> I don't think you should make these kinds of insulting judgements >> without research. >> >> > > The author himself said he didn't have time. I didn't mean to be > insulting, and I apologize if I was. 120 versus 3000 seems like the > MySQL entry guys were operating with an entirely separate set of > assumptions, and spent much more time optimizing it and determining the > exact contest requirements. > Maybe you should have had a look at the article before speculating. Contest requirement was very easy: take the DS sample and make it fast on a given average PC hardware. The MySQL guys were able to take a working system, others had to write a new db access layer or even code a new one. MySQL had a whole (paid? full-time?) team on that, with half the work already done. The contest was a wrong labeled app optimization contest. Declaring it as a db comparison and using it as marketing stuff is dubious and highly misleading. Regards, Andreas
On Wed, 2006-08-30 at 00:10 +0200, Andreas Pflug wrote: > > The author himself said he didn't have time. I didn't mean to be > > insulting, and I apologize if I was. 120 versus 3000 seems like the > > MySQL entry guys were operating with an entirely separate set of > > assumptions, and spent much more time optimizing it and determining the > > exact contest requirements. > > > Maybe you should have had a look at the article before speculating. > Contest requirement was very easy: take the DS sample and make it fast > on a given average PC hardware. The MySQL guys were able to take a Before I posted, I read the English press release along with the thread on this list and on pgsql-general, but I don't read German (I only found the English translation now). I also read your statement in this thread that the MySQL guys tuned the application "to access the database as rare [sic] as possible using memcache." To me, this fact alone means that the author of the PostgreSQL entry operated under different assumptions than the author of the MySQL entry. Even "simple" contest requirements can be interpreted differently due to assumptions. For instance, maybe the author of the PostgreSQL entry made the wrong assumptions because, as you put it, the contest was "wrong labeled app optimization"? I stand by my original statement that it was more about understanding and adapting to the contest than anything to do with the technical database details (like storage engines). Regards, Jeff Davis
Jeff Davis wrote: > On Wed, 2006-08-30 at 00:10 +0200, Andreas Pflug wrote: > >>> The author himself said he didn't have time. I didn't mean to be >>> insulting, and I apologize if I was. 120 versus 3000 seems like the >>> MySQL entry guys were operating with an entirely separate set of >>> assumptions, and spent much more time optimizing it and determining the >>> exact contest requirements. >>> >>> >> Maybe you should have had a look at the article before speculating. >> Contest requirement was very easy: take the DS sample and make it fast >> on a given average PC hardware. The MySQL guys were able to take a >> > > Before I posted, I read the English press release along with the thread > on this list and on pgsql-general, but I don't read German (I only found > the English translation now). I also read your statement in this thread > that the MySQL guys tuned the application "to access the database as > rare [sic] as possible using memcache." > > To me, this fact alone means that the author of the PostgreSQL entry > operated under different assumptions than the author of the MySQL entry. > Even "simple" contest requirements can be interpreted differently due to > assumptions. For instance, maybe the author of the PostgreSQL entry made > the wrong assumptions because, as you put it, the contest was "wrong > labeled app optimization"? > > I stand by my original statement that it was more about understanding > and adapting to the contest than anything to do with the technical > database details (like storage engines). > The MySQL guys could skip the db adaptation completely and proceed to advanced app-side caching techniques. These were distorted prerequisites, not assumptions. It was clear that everybody could use any technique; only the user interface was fixed. I'm not sure c't tested whether data was made persistent at all, so a pure in-memory fake might have worked as well (one could argue that using a db without constraints is not far from that). Regards, Andreas
Jeff Davis wrote: >On Wed, 2006-08-30 at 00:10 +0200, Andreas Pflug wrote: > > > > Guys, what I need is alternative benchmark data, where postgresql comes out better on standard assumptions, etc. than in this example. Can this be done? Michael
Michael, > Guys, what I need is alternative benchmark data, where postgresql comes > out better on standard assumptions, etc. than i this example Can this > be done? Sure, we perform better on DBT2 and other ACID-transaction intensive tests. Go for it. -- Josh Berkus PostgreSQL @ Sun San Francisco
mdean wrote: > Jeff Davis wrote: > >> On Wed, 2006-08-30 at 00:10 +0200, Andreas Pflug wrote: >> >> >> >> > > Guys, what I need is alternative benchmark data, where postgresql comes > out better on standard assumptions, etc. than in this example. Can this > be done? Sure, run dbt2 or dbt3 or dbt4 > Michael
pgsql@j-davis.com (Jeff Davis) writes: > On Wed, 2006-08-30 at 00:10 +0200, Andreas Pflug wrote: >> > The author himself said he didn't have time. I didn't mean to be >> > insulting, and I apologize if I was. 120 versus 3000 seems like the >> > MySQL entry guys were operating with an entirely separate set of >> > assumptions, and spent much more time optimizing it and determining the >> > exact contest requirements. >> > >> Maybe you should have had a look at the article before speculating. >> Contest requirement was very easy: take the DS sample and make it fast >> on a given average PC hardware. The MySQL guys were able to take a > > Before I posted, I read the English press release along with the thread > on this list and on pgsql-general, but I don't read German (I only found > the English translation now). I also read your statement in this thread > that the MySQL guys tuned the application "to access the database as > rare [sic] as possible using memcache." > > To me, this fact alone means that the author of the PostgreSQL entry > operated under different assumptions than the author of the MySQL entry. > Even "simple" contest requirements can be interpreted differently due to > assumptions. For instance, maybe the author of the PostgreSQL entry made > the wrong assumptions because, as you put it, the contest was "wrong > labeled app optimization"? > > I stand by my original statement that it was more about understanding > and adapting to the contest than anything to do with the technical > database details (like storage engines). I wonder if throwing in pgmemcache could have had some similar effects on a PostgreSQL-based system... -- output = reverse("gro.gultn" "@" "enworbbc") http://www3.sympatico.ca/cbbrowne/x.html This is Linux country. On a quiet night, you can hear NT re-boot.
Chris Browne wrote: > pgsql@j-davis.com (Jeff Davis) writes: > > >> On Wed, 2006-08-30 at 00:10 +0200, Andreas Pflug wrote: >> >>>> The author himself said he didn't have time. I didn't mean to be >>>> insulting, and I apologize if I was. 120 versus 3000 seems like the >>>> MySQL entry guys were operating with an entirely separate set of >>>> assumptions, and spent much more time optimizing it and determining the >>>> exact contest requirements. >>>> >>>> >>> Maybe you should have had a look at the article before speculating. >>> Contest requirement was very easy: take the DS sample and make it fast >>> on a given average PC hardware. The MySQL guys were able to take a >>> >> Before I posted, I read the English press release along with the thread >> on this list and on pgsql-general, but I don't read German (I only found >> the English translation now). I also read your statement in this thread >> that the MySQL guys tuned the application "to access the database as >> rare [sic] as possible using memcache." >> >> To me, this fact alone means that the author of the PostgreSQL entry >> operated under different assumptions than the author of the MySQL entry. >> Even "simple" contest requirements can be interpreted differently due to >> assumptions. For instance, maybe the author of the PostgreSQL entry made >> the wrong assumptions because, as you put it, the contest was "wrong >> labeled app optimization"? >> >> I stand by my original statement that it was more about understanding >> and adapting to the contest than anything to do with the technical >> database details (like storage engines). >> > > I wonder if throwing in pgmemcache could have had some similar effects > on a PostgreSQL-based system... > Certainly. I already proposed to take that very latest tuned app and port it to pgsql, which is just standard porting using PHP. This would make the app truly comparable. Regards, Andreas
mdean wrote: > Jeff Davis wrote: > >> On Wed, 2006-08-30 at 00:10 +0200, Andreas Pflug wrote: >> >> >> >> > > Guys, what I need is alternative benchmark data, where postgresql > comes out better on standard assumptions, etc. than i this example > Can this be done? > Michael > > > I think it can be done, it's just a lot of work. At a minimum, I'd say that: 1) Reliability of final configuration needs to be tested with a "pull the plug" test, probably repeated a number of times. If the database doesn't survive this then it is disqualified as not meeting minimum requirements. 2) Multi-threaded access to the database is needed- at least dozens of threads doing asynchronous changes (updates, inserts, deletes, selects) to the database using transactions. Non-transactional databases need not apply. 3) The database needs to be tuned by people who actually know how to tune the database. This is the classic "Postgres is slow!" mistake- they run a default configuration. This also means the clients run on a different machine, and the specs of the database machine are a) reasonable, and b) known to the tuners, so they can actually use the capabilities of the machine. At this point, I'd say a reasonable lower bound would be a 64-bit CPU, at least 4G of memory, and 6-8 SATA drives in a RAID 1+0 configuration. Note that in any sort of "real" environment, which includes a small webserver app that I actually care about, these requirements will reflect reality. Sooner or later the plug is going to get pulled on my database- power outage, the magic smoke being released from some hardware, something will happen which will make the database die uncleanly, and need to recover from it. And sooner or later I'm going to have more than 1 person accessing the database at a time- at which point transactions will be a lifesaver (as a side note, this is why I picked Postgres over Mysql). 
Even more so, when performance is most important is when I have lots of people hitting the DB simultaneously- my website just got slashdotted or what have you. And finally, no matter which database I end up picking, I'm going to put some time into learning that database, including how to tune that database. The problem here is the cost- both in hardware (we're talking ~$3-4K for the DB server alone), but even more so in time. Time to set up the database, time for the knowledgeable people to come out of the woodwork and help you configure the database (and possibly the application), time to unplug each database multiple times, etc. Not even a long weekend is enough time; you're looking at weeks, if not months. More time than most journalists are willing to invest just to write a "Database performance shoot out! Details inside!" article. Brian
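The multi-threaded requirement Brian lists (point 2) can be sketched as a small benchmark-driver skeleton: dozens of worker threads each running many transactions concurrently. Everything here is an illustrative stand-in, not contest code: `run_transaction` is a placeholder for real SQL work (a BEGIN ... COMMIT against PostgreSQL), and a lock-guarded counter plays the role of the database's transactional isolation.

```python
# Skeleton of a concurrent transactional workload driver.
# THREADS worker threads each execute TXNS_PER_THREAD "transactions";
# the Lock stands in for the isolation a real database would provide.

import threading

THREADS = 32            # "at least dozens of threads"
TXNS_PER_THREAD = 100   # work per thread; tune both for a real run

lock = threading.Lock()
committed = 0           # stands in for rows committed by the database

def run_transaction():
    """Placeholder for one atomic unit of work
    (updates/inserts/deletes/selects inside one transaction)."""
    global committed
    with lock:  # without this guard, concurrent updates would be lost
        committed += 1

def worker():
    for _ in range(TXNS_PER_THREAD):
        run_transaction()

threads = [threading.Thread(target=worker) for _ in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With proper isolation, no transactions are lost under concurrency.
assert committed == THREADS * TXNS_PER_THREAD
```

The final assertion is the point of point 2: a database (or a benchmark entry) that drops or corrupts concurrent writes would fail it, which is exactly what separates transactional engines from the non-transactional setups criticized earlier in the thread.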