Re: LWLock Queue Jumping - Mailing list pgsql-hackers

From Stefan Kaltenbrunner
Subject Re: LWLock Queue Jumping
Date
Msg-id 4A9B8511.6090802@kaltenbrunner.cc
In response to Re: LWLock Queue Jumping  (Jeff Janes <jeff.janes@gmail.com>)
List pgsql-hackers
Jeff Janes wrote:
> On Sun, Aug 30, 2009 at 11:01 AM, Stefan Kaltenbrunner 
> <stefan@kaltenbrunner.cc> wrote:
> 
>     Jeff Janes wrote:
> 
>            ---------- Forwarded message ----------
>            From: Stefan Kaltenbrunner <stefan@kaltenbrunner.cc>
>            To: Heikki Linnakangas <heikki.linnakangas@enterprisedb.com>
>            Date: Sun, 30 Aug 2009 11:48:47 +0200
>            Subject: Re: LWLock Queue Jumping
>            Heikki Linnakangas wrote:
> 
> 
>                I don't have any pointers right now, but WALInsertLock does
>                often show
>                up as a bottleneck in write-intensive benchmarks.
> 
> 
>            yeah I recently ran across that issue while testing
>         concurrent COPY
>            performance:
> 
>          http://www.kaltenbrunner.cc/blog/index.php?/archives/27-Benchmarking-8.4-Chapter-2bulk-loading.html
>            discussed here:
> 
>            http://archives.postgresql.org/pgsql-hackers/2009-06/msg01019.php
> 
> 
>         It looks like this is the bulk loading of data into unindexed
>         tables.  How good is that as a target for optimization?  I can
>         see several (quite difficult to code and maintain) ways to make
>         bulk loading into unindexed tables faster, but they would not
>         speed up the more general cases.
> 
> 
>     well, bulk loading into unindexed tables is quite a common workload -
>     apart from dump/restore cycles (which we can now do in parallel), a
>     lot of analytic workloads look that way:
>     import tons of data from various sources every night/week/month,
>     index, analyze & aggregate, drop again.
> 
> 
> In those cases where you end by dropping the tables, we should be 
> willing to bypass WAL altogether, right?  Is the problem we can bypass 
> WAL (by doing the COPY in the same transaction that created or truncated 
> the table), or we can COPY in parallel, but we can't do both simultaneously?

well yes, that is part of the problem - if you bulk load into one or a 
few tables concurrently, you can only sometimes make use of the WAL 
bypass optimization. This is especially interesting given that COPY 
alone is more or less CPU-bottlenecked these days, so using multiple 
cores makes sense to get higher load rates.
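For reference, a minimal sketch of the WAL-bypass case being discussed 
(table and file names are made up; this assumes WAL archiving is not 
enabled, since the optimization is disabled when archive_mode is on):

```sql
BEGIN;
-- Because the table is created in the same transaction as the load,
-- the server can skip WAL for the COPY and just fsync the relation
-- at commit. TRUNCATE within the transaction qualifies the same way.
CREATE TABLE staging_import (id integer, payload text);
COPY staging_import FROM '/tmp/data.csv' CSV;  -- hypothetical path
COMMIT;
```

The contention point in the thread is that two parallel sessions can 
each do this for their *own* new table, but parallel COPY into a single 
shared table cannot take the bypass, so the writers fall back to 
competing for WALInsertLock.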


Stefan

