Thread: Performance degradation of inserts when database size grows

Performance degradation of inserts when database size grows

From: Andrey Vorobiev
Date:

Hi, guys.


I have the following environment configuration:

- Postgres 8.4.7 with the following postgresql.conf settings modified:

      listen_addresses = '*' 

      max_connections = 100


      shared_buffers = 2048MB 

      max_prepared_transactions = 100

      wal_buffers = 1024kB


      checkpoint_segments = 64

      checkpoint_completion_target = 0.8


      log_checkpoints = on


- Two databases. Let's call them db_1 and db_2.

- A J2EE application server that performs inserts into the databases defined above (distributed transactions are used).

- All constraints and indexes are enabled.

- JMeter acting as an HTTP client that sends requests to the server, causing it to insert rows (a new-user registration scenario).


After running the scenario described above (with 10 concurrent threads), I observed the following behavior:


At first everything is fine and the J2EE server handles about 700 requests/sec (about 2500 inserts into several tables per second). But after some time I observe performance degradation. In general it looks like the following:


Total number of requests passed; Requests per second;

382000;   768;

546000;   765;

580000;   723;

650000;   700;

671000;   656;

700000;   628;


Checkpoint logging gives me the following:

2011-05-17 18:55:51 NOVST LOG:  checkpoint starting: xlog

2011-05-17 18:57:20 NOVST LOG:  checkpoint complete: wrote 62861 buffers (24.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=89.196 s, sync=0.029 s, total=89.242 s

2011-05-17 18:57:47 NOVST LOG:  checkpoint starting: xlog

2011-05-17 18:59:02 NOVST LOG:  checkpoint complete: wrote 83747 buffers (31.9%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=75.001 s, sync=0.043 s, total=75.061 s

2011-05-17 18:59:29 NOVST LOG:  checkpoint starting: xlog

2011-05-17 19:00:30 NOVST LOG:  checkpoint complete: wrote 97341 buffers (37.1%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=60.413 s, sync=0.050 s, total=60.479 s

2011-05-17 19:00:55 NOVST LOG:  checkpoint starting: xlog

2011-05-17 19:01:48 NOVST LOG:  checkpoint complete: wrote 110149 buffers (42.0%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=52.285 s, sync=0.072 s, total=52.379 s

2011-05-17 19:02:11 NOVST LOG:  checkpoint starting: xlog

2011-05-17 19:02:58 NOVST LOG:  checkpoint complete: wrote 120003 buffers (45.8%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=46.766 s, sync=0.082 s, total=46.864 s

2011-05-17 19:03:20 NOVST LOG:  checkpoint starting: xlog

2011-05-17 19:04:18 NOVST LOG:  checkpoint complete: wrote 122296 buffers (46.7%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=57.795 s, sync=0.054 s, total=57.867 s

2011-05-17 19:04:38 NOVST LOG:  checkpoint starting: xlog

2011-05-17 19:05:34 NOVST LOG:  checkpoint complete: wrote 128165 buffers (48.9%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=55.061 s, sync=0.087 s, total=55.188 s

2011-05-17 19:05:53 NOVST LOG:  checkpoint starting: xlog

2011-05-17 19:06:51 NOVST LOG:  checkpoint complete: wrote 138508 buffers (52.8%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=57.919 s, sync=0.106 s, total=58.068 s

2011-05-17 19:07:08 NOVST LOG:  checkpoint starting: xlog

2011-05-17 19:08:21 NOVST LOG:  checkpoint complete: wrote 132485 buffers (50.5%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=72.949 s, sync=0.081 s, total=73.047 s

2011-05-17 19:08:40 NOVST LOG:  checkpoint starting: xlog

2011-05-17 19:09:48 NOVST LOG:  checkpoint complete: wrote 139542 buffers (53.2%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=68.193 s, sync=0.107 s, total=68.319 s

2011-05-17 19:10:06 NOVST LOG:  checkpoint starting: xlog

2011-05-17 19:11:31 NOVST LOG:  checkpoint complete: wrote 137657 buffers (52.5%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=84.575 s, sync=0.047 s, total=84.640 s


I also observed heavier I/O in the iostat output.


So my questions are:

1. How does database size affect insert performance?

2. Why does the number of written buffers increase when the database size grows?

3. How can I further analyze this problem?
--
Best regards.

Re: Performance degradation of inserts when database size grows

From: Leonardo Francalanci
Date:

>1. How does database size affect insert performance?
>2. Why does the number of written buffers increase when the database size grows?

It might be related to indexes. Index size affects insert performance.

>3. How can I further analyze this problem?


Try without indexes?
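
Something like this (just a sketch; the table and index names here are
made up) would isolate the index cost:

    -- drop the secondary indexes before a test run
    DROP INDEX users_email_idx;
    DROP INDEX users_last_name_idx;

    -- ... rerun the JMeter scenario and compare insert rates ...

    -- restore the indexes afterwards
    CREATE INDEX users_email_idx ON users (email);
    CREATE INDEX users_last_name_idx ON users (last_name);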

Re: Performance degradation of inserts when database size grows

From: Shaun Thomas
Date:
On 05/17/2011 07:45 AM, Andrey Vorobiev wrote:

> 2011-05-17 18:55:51 NOVST LOG: checkpoint starting: xlog
> 2011-05-17 18:57:20 NOVST LOG: checkpoint complete: wrote 62861 buffers
> (24.0%); 0 transaction log file(s) added, 0 removed, 0 recycled;
> write=89.196 s, sync=0.029 s, total=89.242 s

Increase your checkpoint_segments. If you see "checkpoint starting:
xlog" instead of "checkpoint starting: time", you don't have enough
checkpoint segments to handle your writes. Checkpoints *will* degrade
your throughput.
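
For example, a postgresql.conf sketch along those lines (the values
are illustrative starting points, not tuned recommendations):

    checkpoint_segments = 128     # up from 64
    checkpoint_timeout = 15min    # default is 5min

The goal is for most checkpoints in the log to read "checkpoint
starting: time" rather than "checkpoint starting: xlog".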

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
sthomas@peak6.com


Re: Performance degradation of inserts when database size grows

From: Tomas Vondra
Date:
On 23.5.2011 15:30, Shaun Thomas wrote:
> On 05/17/2011 07:45 AM, Andrey Vorobiev wrote:
>
>> 2011-05-17 18:55:51 NOVST LOG: checkpoint starting: xlog
>> 2011-05-17 18:57:20 NOVST LOG: checkpoint complete: wrote 62861 buffers
>> (24.0%); 0 transaction log file(s) added, 0 removed, 0 recycled;
>> write=89.196 s, sync=0.029 s, total=89.242 s
>
> Increase your checkpoint_segments. If you see "checkpoint starting:
> xlog" instead of "checkpoint starting: time", you don't have enough
> checkpoint segments to handle your writes. Checkpoints *will* degrade
> your throughput.
>

Really? He already has 64 checkpoint segments, which is about 1GB of
xlog data. The real problem is that the amount of buffers to write is
constantly growing. At the beginning there's 62861 buffers (500MB) and
at the end there's 137657 buffers (1GB).

IMHO increasing the number of checkpoint segments would make this
disruption even worse.

What I don't understand is that the checkpoint time does not increase
with the amount of data to write (one buffer is 8 kB, so the first
checkpoint wrote 62861 buffers = about 491 MB in 89.242 s, roughly
5 MB/s):

   62861 buffers      total=89.242 s    ( 5 MB/s)
   83747 buffers      total=75.061 s    ( 9 MB/s)
   97341 buffers      total=60.479 s    (13 MB/s)
  110149 buffers      total=52.379 s    (17 MB/s)
  120003 buffers      total=46.864 s    (20 MB/s)
  122296 buffers      total=57.867 s    (17 MB/s)
  128165 buffers      total=55.188 s    (18 MB/s)
  138508 buffers      total=58.068 s    (19 MB/s)
  132485 buffers      total=73.047 s    (14 MB/s)
  139542 buffers      total=68.319 s    (16 MB/s)
  137657 buffers      total=84.640 s    (13 MB/s)

Maybe this depends on what sections of the files are modified
(contiguous vs. not contiguous), but I doubt it.

In 9.1 there's a feature that spreads checkpoint writes, but with 8.4
that's not possible. I think this might be tuned using the background
writer; just make it more aggressive (see the sketch after this list):

- bgwriter_delay (decrease)
- bgwriter_lru_maxpages (increase)
- bgwriter_lru_multiplier (increase)
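
A sample postgresql.conf sketch along those lines (illustrative
values only; the defaults are shown in the comments):

    bgwriter_delay = 50ms             # default 200ms
    bgwriter_lru_maxpages = 500       # default 100
    bgwriter_lru_multiplier = 4.0     # default 2.0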

regards
Tomas

Re: Performance degradation of inserts when database size grows

From: Robert Haas
Date:
On Mon, May 23, 2011 at 2:46 PM, Tomas Vondra <tv@fuzzy.cz> wrote:
> Really? He already has 64 checkpoint segments, which is about 1GB of
> xlog data. The real problem is that the amount of buffers to write is
> constantly growing. At the beginning there's 62861 buffers (500MB) and
> at the end there's 137657 buffers (1GB).
>
> IMHO increasing the number of checkpoint segments would make this
> disruption even worse.

Maybe - but it would also make the checkpoints less frequent, which
might be a good thing.

> In 9.1 there's a feature that spreads checkpoint writes, but with 8.4
> that's not possible.

What feature are you referring to here?  Checkpoint spreading was
added in 8.3, IIRC.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Re: Performance degradation of inserts when database size grows

From: Robert Haas
Date:
On Tue, May 17, 2011 at 8:45 AM, Andrey Vorobiev
<andrey.o.vorobiev@gmail.com> wrote:
> 1. How does database size affect insert performance?

Well, if your database gets bigger, then your indexes will become
deeper, requiring more time to update.  But I'm not sure that's your
problem here.

> 2. Why does the number of written buffers increase when the database size grows?

It normally doesn't.

> 3. How can I further analyze this problem?

Are you actually inserting more user data into these tables, so that
they have more and more rows as time goes by, or are the data files
getting larger out of proportion to the amount of useful data in them?
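
One way to check (a sketch using the standard statistics views) is to
compare live vs. dead tuples against the on-disk size:

    SELECT relname,
           n_live_tup,
           n_dead_tup,
           pg_size_pretty(pg_total_relation_size(relid)) AS total_size
    FROM pg_stat_user_tables
    ORDER BY pg_total_relation_size(relid) DESC
    LIMIT 10;

A large or growing n_dead_tup relative to n_live_tup would suggest
bloat rather than useful data growth.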

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Re: Performance degradation of inserts when database size grows

From: Tomas Vondra
Date:
On 23.5.2011 21:05, Robert Haas wrote:
> On Mon, May 23, 2011 at 2:46 PM, Tomas Vondra <tv@fuzzy.cz> wrote:
>> Really? He already has 64 checkpoint segments, which is about 1GB of
>> xlog data. The real problem is that the amount of buffers to write is
>> constantly growing. At the beginning there's 62861 buffers (500MB) and
>> at the end there's 137657 buffers (1GB).
>>
>> IMHO increasing the number of checkpoint segments would make this
>> disruption even worse.
>
> Maybe - but it would also make the checkpoints less frequent, which
> might be a good thing.
>
>> In 9.1 there's a feature that spreads checkpoint writes, but with 8.4
>> that's not possible.
>
> What feature are you referring to here?  Checkpoint spreading was
> added in 8.3, IIRC.

You're absolutely right, I was talking about

  checkpoint_completion_target

and it was added in 8.3. Your memory is obviously better than mine.

Tomas

Re: Performance degradation of inserts when database size grows

From: Terry Schmitt
Date:
As near as I can tell from your test configuration description, you have JMeter --> J2EE --> Postgres.
Have you ruled out the J2EE server as the problem? The problem may not be in the database at all.
I would take a look at your app server's health and look for any potential issues there before spending too much time on the database. Perhaps there are memory issues or excessive garbage collection on the app server?
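
For instance, if the app server runs on a HotSpot JVM, GC activity can
be logged with flags like these (exact flags vary by JVM vendor and
version):

    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps ...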

Terry



Re: Performance degradation of inserts when database size grows

From: Tomas Vondra
Date:
On 24.5.2011 07:24, Terry Schmitt wrote:
> As near as I can tell from your test configuration description, you have
> JMeter --> J2EE --> Postgres.
> Have you ruled out the J2EE server as the problem? This problem may not
> be the database.
> I would take a look at your app server's health and look for any
> potential issues there before spending too much time on the database.
> Perhaps there are memory issues or excessive garbage collection on the
> app server?

It might be part of the problem, yes, but it's just a guess. We need to
see some iostat / iotop / vmstat output to confirm that.

The probable cause here is that the indexes grow with the table, get
deeper, so when you insert a new row you need to modify more and more
pages. That's why the number of buffers grows over time and the
checkpoint takes more and more time (the average write speed is about 15
MB/s - not sure if that's good or bad performance).

The question is whether this is influenced by other activity (Java GC or
something).

I see three ways to improve the checkpoint performance:

  1) set checkpoint_completion_target = 0.9 or something like that
     (this should spread the checkpoint, but it also increases the
     amount of checkpoint segments to keep)

  2) make the background writer more aggressive (tune the bgwriter_*
     variables), this is similar to (1)

  3) improve the write performance (not sure how random the I/O is in
     this case, but a decent controller with a cache might help)

and then two ways to decrease the index overhead / amount of modified
buffers

  1) keep only the really necessary indexes (remove duplicate indexes,
     remove indexes where another index already performs reasonably
     well, etc.)

  2) partition the table (so that only indexes on the current partition
     will be modified, and those will be shallower; see the sketch
     below)
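
A minimal sketch of what (2) could look like on 8.4, which has no
declarative partitioning (table and column names are hypothetical):

    -- monthly partition of a parent table "users" with a
    -- "created_at" timestamp column
    CREATE TABLE users_2011_05 (
        CHECK (created_at >= DATE '2011-05-01'
           AND created_at <  DATE '2011-06-01')
    ) INHERITS (users);

    -- each partition gets its own smaller, shallower indexes
    CREATE INDEX users_2011_05_email_idx ON users_2011_05 (email);

    -- route inserts to the current partition
    CREATE OR REPLACE FUNCTION users_insert_trigger() RETURNS trigger AS $$
    BEGIN
        INSERT INTO users_2011_05 VALUES (NEW.*);
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER users_insert BEFORE INSERT ON users
        FOR EACH ROW EXECUTE PROCEDURE users_insert_trigger();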

Tomas

Re: Performance degradation of inserts when database size grows

From: Greg Smith
Date:
On 05/17/2011 08:45 AM, Andrey Vorobiev wrote:
> 1. How does database size affect insert performance?

As indexes grow, it becomes slower to insert into them.  Each insert has
to navigate all of the indexes on the table to figure out where to add
the new row, and that navigation time goes up as the tables get larger.
Try using the queries at
http://wiki.postgresql.org/wiki/Disk_Usage to quantify how big your
indexes are.  Many people are absolutely shocked to see how large they
become.  And some database designers throw indexes onto every possible
column combination as if they were free.
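
A simplified variant of those queries (a sketch) that lists the ten
largest indexes in the current database:

    SELECT c.relname AS index_name,
           pg_size_pretty(pg_relation_size(c.oid)) AS index_size
    FROM pg_class c
    WHERE c.relkind = 'i'
    ORDER BY pg_relation_size(c.oid) DESC
    LIMIT 10;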

> 2. Why does the number of written buffers increase when the database size grows?
>

As indexes grow, the changes needed to insert more rows get spread over
more blocks too.

You can install pg_buffercache and analyze what's actually getting dirty
in the buffer cache to directly measure what's changing here.  If you
look at http://projects.2ndquadrant.com/talks and download the "Inside
the PostgreSQL Buffer Cache" talk and its "Sample Queries" set, those
will give you some examples of how to summarize everything.
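
For example, a sketch of such a summary (assuming the pg_buffercache
contrib module is installed; it ignores database and tablespace
qualification for brevity):

    SELECT c.relname,
           count(*) AS buffers,
           sum(CASE WHEN b.isdirty THEN 1 ELSE 0 END) AS dirty_buffers
    FROM pg_buffercache b
    JOIN pg_class c ON c.relfilenode = b.relfilenode
    GROUP BY c.relname
    ORDER BY dirty_buffers DESC
    LIMIT 10;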

> 3. How can I further analyze this problem?

This may not actually be a problem you can resolve outright.  If you
assume that you can insert into a giant table at the
same speed you can insert into a trivial one, you'll have to adjust your
thinking because that's never going to be true.  Removing some indexes
may help; reducing the columns in the index is also good; and some
people end up partitioning their data specifically to help with this
situation.  It's also possible to regain some of the earlier performance
using things like REINDEX and CLUSTER.
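
For example (table and index names are hypothetical; note that both
commands take exclusive locks while they run):

    REINDEX TABLE users;               -- rebuild all indexes on the table
    CLUSTER users USING users_pkey;    -- rewrite the table in index order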

--
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books