
From: Markus Schiltknecht
Subject: Re: dbt2 NOTPM numbers
Date:
Msg-id: 46646065.50806@bluegap.ch
In response to: Re: dbt2 NOTPM numbers (PFC <lists@peufeu.com>)
Responses: Re: dbt2 NOTPM numbers (Jim Nasby <decibel@decibel.org>)
List: pgsql-performance
Hi,

PFC wrote:
>     You have a huge amount of iowait !

Yup.

>     Did you put the xlog on a separate disk ?

No, it's all one big RAID6, for the sake of simplicity. (Plus, I somewhat
doubt that 2 disks for WAL + 5 for data + 1 spare would be much faster
than 7 disks for WAL and data + 1 spare: since RAID6 needs two parity
disks, that leaves only 3 vs. 5 disks for data...)

>     What filesystem do you use ?

XFS

>     Did you check that your BBU cache works ?

Thanks to your hint, yes. I've attached the small Python script below, in
case it helps someone else, too.

>     For that run a dumb script which does INSERTS in a test table in
> autocommit mode ; if you get (7200rpm / 60) = 120 inserts / sec or less,
> the good news is that your drives don't lie about fsync, the bad news is
> that your BBU cache isn't working...

According to my little script, I consistently get around 6,000 inserts
per second, so I guess either my BBU works, or the drives are
lying ;-)   Simplistic throughput testing with dd gives > 200 MB/s, which
also seems fine.


Obviously there's something else I'm doing wrong. I haven't paid much
attention to postgresql.conf so far, beyond setting a larger
shared_buffers and a reasonable effective_cache_size.
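
For a box with, say, 4 GB of RAM, that amounts to something along these
lines (the values here are illustrative, not a claim about ideal tuning):

  shared_buffers = 1GB            # ~25% of RAM is a common starting point
  effective_cache_size = 3GB      # rough estimate of the OS cache available

Everything else is at its default.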


Oh, something else that's probably worth thinking about (it just came
to my mind again): the XFS filesystem sits on LVM2, on top of that RAID6.
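
Something worth checking there, I suppose: whether XFS sees the stripe
geometry through the LVM layer at all (adjust the mount point):

  xfs_info /opt/dbt2 | grep -E 'sunit|swidth'

sunit/swidth of 0 would mean XFS can't align its allocations to the
RAID stripe.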


Regards

Markus


Simplistic throughput testing with dd:

dd of=test if=/dev/zero bs=10K count=800000
800000+0 records in
800000+0 records out
8192000000 bytes (8.2 GB) copied, 37.3552 seconds, 219 MB/s
pamonth:/opt/dbt2/bb# dd if=test of=/dev/zero bs=10K count=800000
800000+0 records in
800000+0 records out
8192000000 bytes (8.2 GB) copied, 27.6856 seconds, 296 MB/s

#!/usr/bin/python
# Measure single-row INSERT throughput in autocommit mode: every INSERT
# is its own transaction, so each one forces a WAL fsync. Without a
# write cache, a 7200 rpm drive tops out around 120 inserts/sec.

import sys, time
import psycopg

count = 500000

db = psycopg.connect("user=postgres dbname=test")
db.autocommit(True)  # commit (and fsync) after every statement

dbc = db.cursor()
dbc.execute("CREATE TABLE test (data TEXT);")

sys.stdout.flush()
start_t = time.time()
for i in range(count):
    dbc.execute("INSERT INTO test VALUES('insert no. %d');" % i)

diff_t = time.time() - start_t
print "%d inserts in %0.3f seconds, %f inserts/sec" % (count, diff_t, count / diff_t)

dbc.execute("DROP TABLE test;")

bonnie++ results:

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
pamonth          8G 48173  99 214016  33 93972  20 49244  92 266763  32 615.3   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 20510  82 +++++ +++ 25655  98 23954  99 +++++ +++ 25441  99


