Re: Re: PostgreSQL vs MySQL - Mailing list pgsql-general

From: The Hermit Hacker
Subject: Re: Re: PostgreSQL vs MySQL
Msg-id: Pine.BSF.4.21.0007201422070.1745-100000@thelab.hub.org
In response to: Re: PostgreSQL vs MySQL (Dimitri <dimitri@france.sun.com>)
List: pgsql-general
I'd be interested in seeing what sort of results PgSQL v7.x gives; from
the date of your test, it appears you were using v6.5.3?

On Thu, 20 Jul 2000, Dimitri wrote:

> Hi all!
>
> Just my .5c on the MySQL vs PgSQL discussion...
>
> 1. no war, please :)
> 2. last year, during the same kind of discussion, I ported my db_STRESS
> test to MySQL and PostgreSQL and ran it on the same Sun machine. I was
> curious to compare table vs. row/other locking.
>         As we often see: 1. Idea, 2. Implementation... but without a
> well-done 2., the 1. is just an idea...
>         I sent the results of this test to Monty and Bruce, and I hope
> they were interesting for them and their teams...
> 3. both products are great and they are DIFFERENT, that's why you have a
> choice :))
>
> Best regards,
> -Dimitri
>
>
>
> *******************************
> ***        db_STRESS        ***
> *******************************
>
> Oct. 1999
>
>
> I've called this test db_STRESS - it's exactly what I mean :)
>
> Database Schema
> ===============
>
> -- ============================================================
> --   Table : ZONE
> -- ============================================================
> create table ZONE
> (
>     REF                 CHAR(2)            not null,
>     NAME                CHAR(40)           not null
> );
>
>
> -- ============================================================
> --   Table : SECTION
> -- ============================================================
> create table SECTION
> (
>     REF                 CHAR(2)            not null,
>     REF_ZONE            CHAR(2)            not null,
>     NAME                CHAR(40)           not null
> );
>
>
> -- ============================================================
> --   Table : OBJECT
> -- ============================================================
> create table OBJECT
> (
>     REF                 CHAR(10)              not null,
>     REF_SECTION         CHAR(2)               not null,
>     NAME                CHAR(30)              not null,
>     CREATE_DATE         CHAR(12)              not null,
>     NOTE                CHAR(100)
> );
>
> -- ============================================================
> --   Table : HISTORY
> -- ============================================================
> create table HISTORY
> (
>     REF_OBJECT          CHAR(10)              not null,
>     HORDER              INT                   not null,
>     REF_STAT            CHAR(3)               not null,
>     BEGIN_DATE          CHAR(12)              not null,
>     END_DATE            CHAR(12)                      ,
>     NOTE                CHAR(100)
> );
>
>
> -- ============================================================
> --   Index :
> -- ============================================================
>
> create unique index zone_ref_idx on ZONE( ref );
> create unique index stat_ref_idx on STAT( ref );
> create unique index section_ref_idx on SECTION( ref );
> create unique index object_ref_idx on OBJECT( ref );
> create unique index history_ref_idx on HISTORY( ref_object, horder );
>
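> Note: the STAT table referenced by stat_ref_idx above is not shown in
> this post. A minimal sketch, inferred from the REF_STAT CHAR(3) column
> of HISTORY (the NAME column is an assumption), would be:
>
> -- ============================================================
> --   Table : STAT
> -- ============================================================
> create table STAT
> (
>     REF                 CHAR(3)            not null,
>     NAME                CHAR(40)           not null
> );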
>
> So we have relations: OBJECT ===> SECTION ===> ZONE
> and OBJECT ===> HISTORY
>
> For each record of OBJECT there are 20 records of HISTORY.
>
>
> Client Program
> ==============
>
> According to its input options, the client program runs two kinds of
> transactions against the database server:
>
>         READ  - read the whole information of a random OBJECT by its
>                 reference
>
>         WRITE - delete + insert + modify a HISTORY record of a random
>                 OBJECT, by reference
>
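> A minimal sketch of the two transactions in SQL (the actual client
> statements are not shown in this post; :ref and :now are placeholders
> for a random OBJECT reference and the current time, and the HORDER
> values and 'NEW' status are illustrative assumptions):
>
> -- READ: whole information of one random OBJECT
> SELECT o.*, s.NAME, z.NAME
>   FROM OBJECT o, SECTION s, ZONE z
>  WHERE o.REF = :ref
>    AND s.REF = o.REF_SECTION
>    AND z.REF = s.REF_ZONE;
> SELECT * FROM HISTORY WHERE REF_OBJECT = :ref ORDER BY HORDER;
>
> -- WRITE: delete + insert + modify HISTORY of one random OBJECT
> BEGIN;  -- PostgreSQL only; MySQL of that era ran without transactions
> DELETE FROM HISTORY WHERE REF_OBJECT = :ref AND HORDER = 20;
> INSERT INTO HISTORY (REF_OBJECT, HORDER, REF_STAT, BEGIN_DATE, NOTE)
>        VALUES (:ref, 20, 'NEW', :now, 'db_STRESS');
> UPDATE HISTORY SET END_DATE = :now
>  WHERE REF_OBJECT = :ref AND HORDER = 19;
> COMMIT;
>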
> Input options:
>         time of test in sec. - 180 in my case
>         timeout interval between two transactions - 1 sec in my case
>         number of READs for one WRITE - 1 (50% writes), 3 (25%),
>         20 (5%) and 1000 (~0%) in my case
>
>
> db_STRESS Description
> =====================
> start 1, 5, 10, 20, 40, 80, 160, 320, 640 client programs with:
>   1, 3, 20, and 1000 READs for one WRITE
>
>
> During the test each client program logs the time of each transaction;
> at the end of the test a short report is generated with the total
> number of transactions (ALL, READ, WRITE), the average transaction
> time (ALL, READ, WRITE), etc.
>
>
> Run Conditions
> ==============
> So, this test is running on an Enterprise 4500: 12 CPUs (400 MHz,
> 4 MB cache), 12 GB RAM.
> OS - Solaris 2.6
> The database has 50,000 OBJECTs, so 1,000,000 records of HISTORY.
> Due to the large RAM, all data are cached by the file system during
> the test, so no disk I/O is performed and ONLY the database server
> architecture/implementation is tested...
>
> MySQL
> =====
> + multithreaded architecture, so very small task overhead; should
> scale well
> ? locks between internal data, etc.
> - table locking for non-SELECT queries
>
> PostgreSQL
> ==========
> + transaction isolation, so no locks between readers and writers
> ? implementation of internal locks, etc.
>
>
>
> > Why do we not use 'INSERT DELAYED' for MySQL?
>
> We have to be sure each transaction is REALLY finished.
>
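> For reference, MySQL's INSERT DELAYED returns to the client as soon as
> the row is queued, before it is actually written, e.g. (using the
> placeholders :ref and :now as above):
>
>     INSERT DELAYED INTO HISTORY (REF_OBJECT, HORDER, REF_STAT, BEGIN_DATE)
>            VALUES (:ref, 20, 'NEW', :now);
>
> so a completed statement would not mean a really finished transaction.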
>
> > It would be nice if you also could run a test with reading/writing
> > to a lot of different tables (in this case the database will scale
> > much better)
>
> Any database will scale better if you use more tables :)
> But usually, if you decided to use a database server, it is to manage
> concurrent access to data...
>
> That's why it will be very interesting to compare table locking and
> row locking...
>
>
>
> > Avg total transaction time should not grow according to the number
> > of clients.  In the worst case things could get serialized and not
> > utilize all CPUs, but this should not affect the transaction time.
>
> Not in the case of MySQL: one INSERT will stop all SELECTs, so
> transaction time will grow with the number of clients.
>
> By transaction time I mean the time of a single transaction (READ or
> WRITE)...
>
>
>
> Options used (different from default) for:
>         MySQL:      --skip-locking  (skip external/system locking)
>         PostgreSQL: -o -F           (pass -F to the backends: no fsync)
>
> We will try to go as fast as possible.
>
> --low-priority-updates for MySQL is not used - it grows WRITE
> transaction time and locks out clients...
>
>
> And NOW the final results! :))
> Use them as you want, and feel free to present them to your
> developers/users/etc....
>
> You will find the graphics in the attachment.
>
>
> MySQL Summary
> =============
> As I supposed, table locking becomes very critical for concurrent
> access...
> Anyway, very stable, no surprises...
>
> *Usage: excellent for read-oriented access with any number of clients;
>         good for read+write access with a small number of clients or
>         writes
>
>
>
>
> PostgreSQL Summary
> ==================
> Excellent performance up to 80 users, and a strange degradation with
> more...
> Sounds like an internal locks/concurrency problem...
> !!! Several core dumps during the test...
>
> *Usage: excellent for any kind of access with a small number of
>         clients...
>
>
> Best regards,
> -Dimitri
>
> --
> ===========================================================
>  Dimitri KRAVTCHUK  (dim)           Sun Microsystems
>  Benchmark Engineer                 France
>  dimitri@france.sun.com             http://goldgate.france
> ===========================================================
>
>
>

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org

