Thread: Speed of PostgreSQL compared to MySQL

Speed of PostgreSQL compared to MySQL

From: "Livio Righetti"
Hi,

Talking about inserts, I know MySQL is faster than PostgreSQL.

I ran the following test:

40,000 inserts (accounting context) using Perl and DBD:

PostgreSQL:
  text    : 4 min 53 s
  varchar : 4 min 49 s
  char    : 4 min 49 s

MySQL:
  text    : 0 min 29 s
  varchar : 0 min 29 s
  char    : 0 min 29 s

So we can see MySQL is about ten times faster.
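
The test loop was essentially of this shape (a sketch only; the DSN,
table, and column names are placeholders, not the real schema, and the
MySQL timing came from the same loop with a dbi:mysql DSN):

  #!/usr/bin/perl -w
  use strict;
  use DBI;

  # Each execute() below runs as its own transaction, because
  # AutoCommit is left on (the DBI default).
  my $dbh = DBI->connect('dbi:Pg:dbname=accounting', 'user', 'secret',
                         { RaiseError => 1, AutoCommit => 1 });
  my $sth = $dbh->prepare('INSERT INTO acct (login, bytes) VALUES (?, ?)');
  for my $i (1 .. 40_000) {
      $sth->execute("user$i", $i * 100);
  }
  $dbh->disconnect;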

We also use PostgreSQL for RADIUS (authentication), and we have to run
VACUUM three times a day, otherwise the first server gets overloaded
and users fail over to the backup server.

Is this normal, or is my PostgreSQL badly configured?

Thanks in advance.



Re: Speed of PostgreSQL compared to MySQL

From: "Oliver Elphick"
"Livio Righetti" wrote:
  >Hi,
  >
  >Talking about insert, I know Mysql is fast than Postgresql.
  >
  >I've made the following test :
  >
  >40'000 insert (accouting context) using Perl and dbd :
  >
  >Postgresql :
  >  text       :  4 min 53 s
  >  varchar :  4 min 49 s
  >  char      :  4 min 49 s
  >
  >Mysql :
  >  text       : 0 min 29 s
  >  varchar : 0 min 29 s
  >  char      : 0 min 29 s
  >
  >So we can see Mysql is about 10 times fast.

That's extremely simplistic.

If these are separate inserts, without an enclosing transaction, they
will also be separate transactions.  40,000 transactions is a lot of
overhead.

If you are not using transactions in MySQL, you are inviting loss of
data integrity if anything goes wrong.  Of course, the same applies if
you are not making proper use of PostgreSQL's transactions.
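
As a minimal sketch (placeholder names throughout, same shape as a
typical DBI loop), turning AutoCommit off makes the whole run one
transaction, committed once at the end:

  use strict;
  use DBI;

  # One transaction for the whole batch: AutoCommit off, a single
  # commit() at the end.
  my $dbh = DBI->connect('dbi:Pg:dbname=accounting', 'user', 'secret',
                         { RaiseError => 1, AutoCommit => 0 });
  my $sth = $dbh->prepare('INSERT INTO acct (login, bytes) VALUES (?, ?)');
  for my $i (1 .. 40_000) {
      $sth->execute("user$i", $i * 100);
  }
  $dbh->commit;
  $dbh->disconnect;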

There are a lot of considerations apart from raw speed.  For a
database, I think that data integrity is a lot more important.

  >We also use PostgreSQL for RADIUS (authentication), and we have to run
  >VACUUM three times a day, otherwise the first server gets overloaded
  >and users fail over to the backup server.
  >
  >Is this normal, or is my PostgreSQL badly configured?

If you are making extensive changes to tables, you need to vacuum them
frequently to recover wasted space and remove deleted entries.
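
For example, a crontab entry along these lines (the table and database
names here are invented) would spread three vacuums across the day:

  # Vacuum the heavily-updated table at 06:00, 14:00 and 22:00.
  0 6,14,22 * * *   psql -c 'VACUUM ANALYZE radius_log' radius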

Why are these tables changing so frequently?  Perhaps the situation
could be improved by some redesign of the application?

Again, you may improve performance by installing a later version of
PostgreSQL.  For example, a very recent change (in 7.1RC1, I think)
improved the speed of COPYing 167,000 records into a table with
referential integrity (RI) constraints from about 2 hours to 5 minutes.

--
Oliver Elphick                                Oliver.Elphick@lfix.co.uk
Isle of Wight                              http://www.lfix.co.uk/oliver
PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47  6B 7E 39 CC 56 E4 C1 47
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
                 ========================================
     "Do not be anxious about anything, but in everything,
      by prayer and supplication, with thanksgiving, present
      your requests to God. And the peace of God, which
      transcends all understanding, will guard your hearts
      and your minds in Christ Jesus."   Philippians 4:6,7



Re: Speed of PostgreSQL compared to MySQL

From: Joel Burton
On Tue, 3 Apr 2001, Livio Righetti wrote:

> Hi,
>
> Talking about inserts, I know MySQL is faster than PostgreSQL.
>
> I ran the following test:
>
> 40,000 inserts (accounting context) using Perl and DBD:
>
> PostgreSQL:
>   text    : 4 min 53 s
>   varchar : 4 min 49 s
>   char    : 4 min 49 s
>
> MySQL:
>   text    : 0 min 29 s
>   varchar : 0 min 29 s
>   char    : 0 min 29 s
>
> So we can see MySQL is about ten times faster.
>
> We also use PostgreSQL for RADIUS (authentication), and we have to run
> VACUUM three times a day, otherwise the first server gets overloaded
> and users fail over to the backup server.
>
> Is this normal, or is my PostgreSQL badly configured?

Err, yes.

Did you just do 40,000 inserts in a row, one after another? Realistic
speed tests often have many requests coming in together, to simulate
application and web usage.

In addition, did you wrap this in a transaction? Otherwise, you're
performing one transaction for *every single* insert, which is much
slower than running them all inside one transaction.

(Generally speaking, if you want to just add 40,000 rows to a table, I'd
use COPY, not INSERT ;-) )
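
A COPY-based load could look roughly like this (a sketch only: I'm
writing the DBD::Pg putline/endcopy calls from memory, and the table
and sample data are invented):

  use strict;
  use DBI;

  my $dbh = DBI->connect('dbi:Pg:dbname=accounting', 'user', 'secret',
                         { RaiseError => 1 });

  # Feed rows straight into COPY ... FROM STDIN through DBD::Pg's
  # driver-private putline/endcopy functions.
  my @rows = (['alice', 100], ['bob', 200]);
  $dbh->do('COPY acct FROM STDIN');
  for my $row (@rows) {
      $dbh->func(join("\t", @$row) . "\n", 'putline');  # tab-separated
  }
  $dbh->func("\\.\n", 'putline');   # end-of-data marker
  $dbh->func('endcopy');
  $dbh->disconnect;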

HTH,

--
Joel Burton   <jburton@scw.org>
Director of Information Systems, Support Center of Washington


Re: Re: Speed of PostgreSQL compared to MySQL

From: The Hermit Hacker
In addition, what are your startup options, and what version of PostgreSQL
are you using?
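
For example (illustrative numbers only, not a recommendation):

  # -B raises the number of shared buffers; -o '-F' passes -F to the
  # backends to turn fsync off, trading crash safety for speed.
  postmaster -D /usr/local/pgsql/data -B 2048 -o '-F'

behaves very differently from the stock defaults.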

On Sun, 8 Apr 2001, Joel Burton wrote:

> On Tue, 3 Apr 2001, Livio Righetti wrote:
>
> > Is this normal, or is my PostgreSQL badly configured?
>
> Err, yes.
>
> Did you just do 40,000 inserts in a row, one after another? Realistic
> speed tests often have many requests coming in together, to simulate
> application and web usage.
>
> In addition, did you wrap this in a transaction? Otherwise, you're
> performing one transaction for *every single* insert, which is much
> slower than running them all inside one transaction.
>
> (Generally speaking, if you want to just add 40,000 rows to a table, I'd
> use COPY, not INSERT ;-) )

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org


Re: Speed of PostgreSQL compared to MySQL

From: Lincoln Yeoh
At 05:30 AM 08-04-2001 -0400, Joel Burton wrote:
>On Tue, 3 Apr 2001, Livio Righetti wrote:
>> We also use PostgreSQL for RADIUS (authentication), and we have to run
>> VACUUM three times a day, otherwise the first server gets overloaded
>> and users fail over to the backup server.
>>
>> Is this normal, or is my PostgreSQL badly configured?
>
>Err, yes.
>
>Did you just do 40,000 inserts in a row, one after another? Realistic
>speed tests often have many requests coming in together, to simulate
>application and web usage.
>
>In addition, did you wrap this in a transaction? Otherwise, you're
>performing one transaction for *every single* insert, which is much
>slower than running them all inside one transaction.
>
>(Generally speaking, if you want to just add 40,000 rows to a table, I'd
>use COPY, not INSERT ;-) )

I don't think COPY is useful or relevant in a normal ISP authentication
logging scenario.

Wrapping more than one insert in a transaction doesn't help here
either. I think you would normally want to commit each customer's
transaction individually.

From his figures he can sustain about 136 inserts a second (40,000
inserts in 4 min 53 s = 293 s).  Is that good enough for peak loads at
a medium to big ISP?

Cheerio,
Link.


Re: Speed of PostgreSQL compared to MySQL

From: Joel Burton
On Mon, 9 Apr 2001, Lincoln Yeoh wrote:

> I don't think COPY is useful or relevant in a normal ISP authentication
> logging scenario.
>
> Wrapping more than one insert in a transaction doesn't help here
> either. I think you would normally want to commit each customer's
> transaction individually.
>
> From his figures he can sustain about 136 inserts a second (40,000
> inserts in 4 min 53 s = 293 s).  Is that good enough for peak loads at
> a medium to big ISP?

Well, I confess I was being a bit facetious. No, COPY isn't generally
useful for web apps, and, in many cases, you would handle each
transaction separately.

However, scheduling 40,000 INSERTs, all neatly following one after
another, and measuring how long that takes isn't very realistic
either! :-)

136 of anything a second seems good to me -- unless one is tracking
micro-level events, like all TCP/IP requests made by all users at an
ISP. Given the inexpensive price of hardware and the expensive cost of
programmer time, it usually seems better to throw some money at 512 MB
of RAM, 7,200 RPM SCSI drives, and such, rather than at paying a
technologist to write lots of Perl/PHP/Python/Pwhatever to build all
the stuff into your web app that MySQL can't do for you.


--
Joel Burton   <jburton@scw.org>
Director of Information Systems, Support Center of Washington