Re: Benchmarking PGSQL? - Mailing list pgsql-performance

From: Merlin Moncure
Subject: Re: Benchmarking PGSQL?
Date:
Msg-id: b42b73150702140820k7fa0afabn43ec61c3d1d881b7@mail.gmail.com
In response to: Re: Benchmarking PGSQL? ("Luke Lonergan" <llonergan@greenplum.com>)
Responses: Re: Benchmarking PGSQL?
List: pgsql-performance
On 2/14/07, Luke Lonergan <llonergan@greenplum.com> wrote:
>
>  Here's one:
>
>  Insert performance is limited to about 10-12 MB/s no matter how fast the
> underlying I/O hardware.  Bypassing the WAL (write ahead log) only boosts
> this to perhaps 20 MB/s.  We've found that the biggest time consumer in the
> profile is the collection of routines that "convert to datum".
>
>  You can perform the test using any dataset, you might consider using the
> TPC-H benchmark kit with a data generator available at www.tpc.org.  Just
> generate some data, load the schema, then perform some COPY statements,
> INSERT INTO SELECT FROM and CREATE TABLE AS SELECT.
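The procedure Luke describes might be sketched roughly as below. This is a hedged illustration, not his exact script: the table names (lineitem, lineitem_copy, lineitem_ctas) and the data file path are placeholders, and the real TPC-H schema and dbgen output locations will vary by setup.

```sql
-- Sketch of the benchmark steps described above (assumes a TPC-H
-- lineitem table has been created and dbgen has produced a
-- '|'-delimited data file; /tmp/lineitem.tbl is a placeholder path).

-- 1. Bulk load via COPY:
COPY lineitem FROM '/tmp/lineitem.tbl' WITH DELIMITER '|';

-- 2. Insert-select from the already-loaded table
--    (lineitem_copy assumed to have the same schema):
INSERT INTO lineitem_copy SELECT * FROM lineitem;

-- 3. CREATE TABLE AS SELECT:
CREATE TABLE lineitem_ctas AS SELECT * FROM lineitem;
```

Timing each statement (for example with psql's \timing) and dividing the loaded data volume by the elapsed time yields the MB/s figures under discussion.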

I am curious what your take is on the maximum insert performance, in
MB/sec, of large bytea columns (toasted), and how much, if at all,
Greenplum was able to advance this over the baseline.  I am asking on
behalf of another interested party.  I am interested in numbers broken
down per core on an 8-core quad system, and also in aggregate.

merlin
