Re: SV: MySQL and PostgreSQL speed compare - Mailing list pgsql-general
From: Ron Chmara
Subject: Re: SV: MySQL and PostgreSQL speed compare
Date:
Msg-id: 3A4D3154.15C68BB9@opus1.com
In response to: SV: MySQL and PostgreSQL speed compare ("Jarmo Paavilainen" <netletter@comder.com>)
List: pgsql-general
Jarmo Paavilainen wrote:
> > Just curious, what kind of tables did you set up in MySQL? My
> Ehh... there are more than one kind... I did not know. Still with
> transactions on PostgreSQL (unsafe method?) MySQL was 2 times as fast as
> PostgreSQL. I will check this out, and return to this list with the results.
> ...
> > tables, then we're just back at the same old "speed versus data
> > integrity" flame war that has always been the base of the
> > MySQL/Postgres comparison.
> Its a question of a compromising between speed and "integrity". *I think*
> PostgreSQL should and could lower their "integrity" a bit (15 times slower).

The test data:
> > > Inserts on MySQL : 0.71sec/1000 rows
> > > Inserts on PostgreSQL: 10.78sec/1000 rows (15 times slower?)
> > > Inserts on PostgreSQL*: 1.59sec/1000 rows (2 times slower?)

It appears to me that the playing field, in this case, is also "benchmark
optimized" for the lowest common denominator (a simple insert). If all you are
doing is very basic selects and inserts at high speed, from one user, mySQL
will be a much better choice for you.

I've found postgreSQL to be 3 to 5 times faster for my use (with fsyncing!)
because of the way I've used it: not because it will do a very simple select
or insert as fast as mySQL, but because it can do very complex joins on
foreign keys during selects efficiently, because it can use sub-selects, and
because it scales better for web applications with 1000 active online users
(32+ concurrent db operations at any given time).

The net effect of this is that while single queries/inserts/updates are 1x or
2x faster with mySQL, my single postgreSQL statements are performing tasks
that would take 3 to 100+ mySQL statements (and outside logic would be needed
to interpret the results before making the next mySQL query, further reducing
mySQL's effective speed).
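As a rough illustration of that statement-collapsing point (a sketch using
Python's stdlib sqlite3 module so it is self-contained; the table and column
names are invented for the example, and stand in for either database): a
single query with a sub-select can replace a query-per-row loop with
application logic between round trips.

```python
import sqlite3

# Invented schema for illustration: customers and their orders.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (1, 1, 50.0), (2, 1, 75.0), (3, 2, 10.0);
""")

# Without sub-selects: one query per customer, with application-side
# logic between round trips (the N+1 pattern).
many_statements = []
for (cid,) in db.execute("SELECT id FROM customers"):
    (total,) = db.execute(
        "SELECT COALESCE(SUM(total), 0) FROM orders WHERE customer_id = ?",
        (cid,),
    ).fetchone()
    if total > 20:
        many_statements.append(cid)

# With a sub-select: one statement, the database does all the work.
one_statement = [cid for (cid,) in db.execute(
    "SELECT id FROM customers WHERE id IN "
    "  (SELECT customer_id FROM orders"
    "   GROUP BY customer_id HAVING SUM(total) > 20)"
)]

assert many_statements == one_statement  # same answer, one round trip instead of N+1
```

The more tables and conditions involved, the more application-side queries and
glue logic the single statement absorbs, which is where the 3 to 100+
statement ratio above comes from.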
Another way of putting it: if your application uses 1-3 tables, and they are
only used once each per user task (a simple web guestbook), and there are only
a few users, mySQL is usually the faster choice. If your application uses 30
tables, and each task works with ten or more of them (selecting from 5,
inserting into 2, and updating 3, as in an accounting or complex ordering and
inventory application), with 25 concurrent users, postgreSQL code *can* be
written to be much faster... but it can also be much slower.

Something interesting about this is that if you write your database code
through an abstraction layer (for example, a php class, or perl DBI/DBD),
that layer may need to assume that each database will only be doing basic
tasks, and so cannot always make use of the advanced features of a given
database. So if your database interaction code is written only for very
simple tasks, mySQL will almost always be faster.

See:
http://www.phpbuilder.com/columns/tim20000705.php3
(with a simple application, and few users, mySQL is 1-2x faster)
http://www.phpbuilder.com/columns/tim20001112.php3
(with a complex application, and many users, postgreSQL is 2-100x faster, and
mySQL crawls or breaks under heavy load)

So a simple select/insert/update benchmark does not accurately show the speed
differences of writing code for complex applications or many users.

-Ronabop
--
Personal: ron@opus1.com, 520-326-6109, http://www.opus1.com/ron/
Work: rchmara@pnsinc.com, 520-546-8993, http://www.pnsinc.com/
The opinions expressed in this email are not necessarily those of myself, my
employers, or any of the other little voices in my head.