Thread: Re: [HACKERS] book status
> application. Indeed, this book project is considerably more likely
> to return tangible benefit to the Postgres group (in the form of new
> users/contributors attracted to the project) than most other ways
> people might be using Postgres to make money.

Exactly! No need to apologize for attracting attention to PostgreSQL.

> In short, you needn't offer the slightest apology for collecting the
> book royalties personally. From what I've heard of the book-writing
> biz, you're unlikely to get rich off it anyway :-(

And if you do get rich, you can retire and spend all your time on
enhancements to PostgreSQL :-)
I think a bit of explanation is required for this story:

http://www.newsalert.com/bin/story?StoryId=CozDUWbKbytiXnZy&FQ=Linux&Nav=na-search-&StoryTitle=Linux

Up until now, the MySQL people have been boasting about performance as the
product's great advantage. Now this contradicts that for the first time. I
believe it has to do with the test. Perhaps MySQL is faster when you just
do one simple SELECT * FROM table, and it has never really been tested in
a real-life (or as close as possible) environment?
Kaare Rasmussen <kar@webline.dk> wrote:
> I think a bit of explanation is required for this story:
>
> http://www.newsalert.com/bin/story?StoryId=CozDUWbKbytiXnZy&FQ=Linux&Nav=na-search-&StoryTitle=Linux
>
> Up until now, the MySQL people have been boasting about performance as
> the product's great advantage. Now this contradicts that for the first
> time. I believe it has to do with the test. Perhaps MySQL is faster when
> you just do one simple SELECT * FROM table, and it has never really been
> tested in a real-life (or as close as possible) environment?

I wouldn't say that this is exactly the first time we've heard about
problems with MySQL's famed "speed". Take the Tim Perdue article that came
out a while back:

http://www.phpbuilder.com/columns/tim20000705.php3?page=1

    The most interesting thing about my test results was to see how much
    of a load Postgres could withstand before giving any errors. In fact,
    Postgres seemed to scale 3 times higher than MySQL before giving any
    errors at all. MySQL begins collapsing at about 40-50 concurrent
    connections, whereas Postgres handily scaled to 120 before balking.
    My guess is that Postgres could have gone far past 120 connections
    with enough memory and CPU.

    On the surface, this can appear to be a huge win for Postgres, but if
    you look at the results in more detail, you'll see that Postgres took
    up to 2-3 times longer to generate each page, so it needs to scale
    2-3 times higher just to break even with MySQL. So in terms of max
    numbers of pages generated concurrently without giving errors, it's
    pretty much a dead heat between the two databases. In terms of
    generating one page at a time, MySQL does it up to 2-3 times faster.

As written, this is not exactly slanted toward PostgreSQL, but you could
easily rephrase it as "MySQL is fast, but not under heavy load. When
heavily loaded, it degrades much faster than PostgreSQL, and they're both
roughly the same speed, despite the fact that PostgreSQL is doing more
(transaction processing, etc.)."

This story has made slashdot:

http://slashdot.org/article.pl?sid=00/08/14/2128237&mode=nested

Some of the comments are interesting. One MySQL defender claims that the
bottleneck in the benchmarks Great Bridge used is the ODBC drivers. It's
possible that all the test really shows is that MySQL has a poor ODBC
driver.
At 09:26 AM 8/15/00 +0200, Kaare Rasmussen wrote:
>I think a bit of explanation is required for this story:
>
>http://www.newsalert.com/bin/story?StoryId=CozDUWbKbytiXnZy&FQ=Linux&Nav=na-search-&StoryTitle=Linux
>
>Up until now, the MySQL people have been boasting about performance as
>the product's great advantage. Now this contradicts that for the first
>time. I believe it has to do with the test. Perhaps MySQL is faster when
>you just do one simple SELECT * FROM table, and it has never really been
>tested in a real-life (or as close as possible) environment?

It's no secret that MySQL falls apart under load when there are inserts
and updates in the mix. They do table-level locking. If you read various
threads about "hints and tricks" in MySQL-land concerning performance in
high-concurrency (i.e. web site) situations, there are all sorts of
suggestions about periodically caching copies of tables for reading so
readers don't get blocked, etc.

The sickness lies in the fact that the folks writing these complex
workarounds are still convinced that MySQL is the fastest, most efficient
DB tool available, that the lack of transactions is making their system
faster, and that the concurrency problems they see are no worse than are
seen with a "real" RDBMS like Oracle or Postgres.

The level of ignorance in the MySQL world is just stunning at times,
mostly due to a lot of openly dishonest (IMO) claims and advocacy by the
authors of MySQL, in their documentation, for instance. A significant
percentage of MySQL users seem to take these statements as gospel and are
offended when you suggest, for instance, that table-level locking isn't
such a hot idea for a DB used to drive a popular website.

At least now when they all shout "Slashdot's popular, and they use MySQL"
we can answer, "yeah, but the Slashdot folks are the ones who paid for
the integration of MySQL with the SleepyCat backend, and guess why?"
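[Editor's note: as a rough illustration of the point above, here is a toy
Python sketch of why table-level locking hurts concurrent readers. It is
not MySQL internals; the lock objects, row count, and timing are invented
for the demo. A single table-wide lock leaves every reader stuck behind a
writer, while per-row locks let readers of other rows proceed.]

```python
import threading
import time

# One lock for the whole table (MySQL-style table-level locking)
# versus one lock per row (finer-grained, closer to row-level locking).
table_lock = threading.Lock()
row_locks = {r: threading.Lock() for r in range(4)}

progress = {"table": 0, "row": 0}  # how many readers got through

def read_with_table_lock(row):
    with table_lock:               # every reader waits on the same lock
        progress["table"] += 1

def read_with_row_lock(row):
    with row_locks[row]:           # only row 0's lock is held by the writer
        progress["row"] += 1

# A "writer" takes its lock and holds it while readers try to run.
table_lock.acquire()
row_locks[0].acquire()

readers = [threading.Thread(target=read_with_table_lock, args=(r,))
           for r in (1, 2, 3)]
readers += [threading.Thread(target=read_with_row_lock, args=(r,))
            for r in (1, 2, 3)]
for t in readers:
    t.start()

time.sleep(0.2)  # give the unblocked readers time to finish
snapshot = (progress["table"], progress["row"])

# Release the writer's locks so the blocked readers can complete.
table_lock.release()
row_locks[0].release()
for t in readers:
    t.join()

# While the writer held its locks: no table-lock reader got through,
# all three row-lock readers did.
print(snapshot)  # -> (0, 3)
```

The same effect is why the "cache a copy of the table for readers"
workarounds exist: they are a hand-rolled way of getting readers out from
under the table lock.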
And the Slashdot folks have been openly talking about rewriting their
code to be more DB agnostic (I refuse to call MySQL an RDBMS) and about
perhaps switching to Oracle in the future. Maybe tests like this and more
user advocacy will convince them to consider Postgres!

- Don Baccus, Portland OR <dhogaza@pacifier.com>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.
> Some of the comments are interesting. One MySQL defender
> claims that the bottleneck in the benchmarks Great Bridge
> used is the ODBC drivers. It's possible that all the test
> really shows is that MySQL has a poor ODBC driver.

Nope. If it were due to the ODBC driver, then MySQL and PostgreSQL would
not have had comparable performance in the 1-2 user case. The ODBC driver
is a per-client interface, so it would have no role as the number of
users goes up.

The Postgres core group were as surprised as anyone with the test
results. There was no effort to "cook the books" on the testing: afaik GB
did the testing as part of *their* evaluation of whether PostgreSQL would
be a viable product for their company. I believe that the tests were all
run on the same system, and, especially given the results, they did go
through and verify that the settings for each DB were reasonable.

The AS3AP test is a *read only test*, which should have been MySQL's
bread and butter according to their marketing literature. The shape of
that curve shows that MySQL started wheezing at about 4 users, and tailed
off rapidly after that point. The other guys barely made it out of the
starting gate :)

The thing that was the most fun about this (the PostgreSQL steering
committee got a sneak preview of the results a couple of months ago) was
that we have never made an effort to benchmark Postgres against other
databases, so we had no quantitative measurement of how we were doing.
And we are doing pretty well!

- Thomas
On Tue, 15 Aug 2000, Thomas Lockhart wrote:

> The AS3AP test is a *read only test*, which should have been MySQL's
> bread and butter according to their marketing literature. The shape of
> that curve shows that MySQL started wheezing at about 4 users, and
> tailed off rapidly after that point. The other guys barely made it out
> of the starting gate :)

Ah, cool, that answers one of my previous questions ... and scary that we
beat "the best database for read only apps" *grin*
On Tue, 15 Aug 2000, Don Baccus wrote:

> It's no secret that MySQL falls apart under load when there are
> inserts and updates in the mix. They do table-level locking. If you
> read various threads about "hints and tricks" in MySQL-land concerning
> performance in high-concurrency (i.e. web site) situations, there are
> all sorts of suggestions about periodically caching copies of tables
> for reading so readers don't get blocked, etc.

Here's one you might like. I am aware of a site (not one I run, and I
shouldn't give its name) which has a share feed (or several). This means
that, every 15 minutes, they have to get a bunch of rows into a few
tables in a real hurry. MySQL's table-level locking causes them such
trouble that they run two instances.

No big surprises there, but here's the fun bit: they both point at the
same datafiles. Their web code accesses a mysqld which was started with
their --readonly and --no-locking flags, so that it never writes to the
datafiles. And the share feed goes through a separate, writable database.
Every now and then a query fails with an error like "Eek! The table
changed under us.", so they modified (or wrapped - I'm not sure) the DBI
driver to retry a couple of times under such circumstances.

The result: it works. And actually quite well (i.e. a lot better than
before). I believe (hope!) that they are using the breathing space to
investigate alternative solutions.

Matthew.