Thread: General performance questions about postgres on Apple hardware...


From: Sean Shanny
To all,

This is a two-question email.  The first asks about general tuning of the
Apple hardware/postgres combination.  The second asks whether it is
possible to speed up a particular query.

PART 1

Hardware:  Apple G5 dual 2.0 with 8GB memory attached via dual fibre
channel to a fully loaded 3.5TB XRaid.  The XRaid is configured as two
7-disk hardware-based RAID5 sets, software-striped to form a RAID50 set.
The DB, WALs, etc. are all on that file set, on the OSX journaled file
system.  Running postgres 7.4.1 on OSX Server 10.3.2.  Postgres is
compiled locally with '--enable-recode' '--enable-multibyte=UNICODE'
'CFLAGS=-mcpu=970 -mtune=970 -mpowerpc64 -O3'

Config stuff that we have changed:

tcpip_socket = true
max_connections = 100

# - Memory -

shared_buffers = 16000          # min 16, at least max_connections*2, 8KB each
sort_mem = 256000               # min 64, size in KB
vacuum_mem = 64000              # min 1024, size in KB
fsync = true                    # turns forced synchronization on or off
wal_sync_method = open_sync     # the default varies across platforms:
                                # fsync, fdatasync, open_sync, or open_datasync
wal_buffers = 64                # min 4, 8KB each
checkpoint_segments = 300       # in logfile segments, min 1, 16MB each
checkpoint_timeout = 30 # range 30-3600, in seconds
effective_cache_size = 400000   # typically 8KB each
random_page_cost = 1            # units are one sequential page fetch cost
default_statistics_target = 1000        # range 1-1000


We are generally getting poor performance out of the RAID set: Apple
claims 200MB/sec per channel, but the best we can get with straight
OS-based data transfers is 143MB/sec. :-( (We have a call in to Apple
about this.)  When I execute the following, where d_url is a big table,

create table temp_url as select * from d_url ;

I would expect to be bound by I/O, but via iostat we are seeing only
about 30MB/sec, with bursts of 100+ when the WAL is written.  sy (system
CPU time) is high as well, and the tps seems low.

Can anyone shed some light on what we might do to improve performance
for postgres on this platform?  Also, is there a test available that we
could run to show the maximum postgres can do on this platform?  This is
a data warehouse system, so generally we only have 1-3 queries running
at any time, more often only 1.  We are obviously working with very
large tables, so we are interested in maximizing our I/O throughput.
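One crude sequential-read check (a sketch, using the d_url table already mentioned; any multi-GB table would do) is to time a full scan from within postgres and compare the implied MB/sec against what the RAID delivers to the OS:

```sql
-- Time a forced sequential scan of a big table.
EXPLAIN ANALYZE SELECT count(*) FROM d_url;

-- Relation size in 8KB pages (relpages is accurate as of the last
-- VACUUM/ANALYZE); rough throughput = size_mb / (runtime in seconds).
SELECT relpages, relpages * 8 / 1024 AS size_mb
FROM pg_class WHERE relname = 'd_url';
```

If this number is far below what dd-style OS transfers achieve, the gap is postgres-side overhead rather than the array itself.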

          disk1           disk2           disk0       cpu
  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us sy id
 17.04 961 15.99  17.16 957 16.03   8.83   6  0.05  12 32 56
 22.75 580 12.89  22.79 578 12.87   0.00   0  0.00  10 34 56
 24.71 586 14.14  24.67 587 14.14   0.00   0  0.00  12 40 48
 21.98 648 13.91  21.97 648 13.91   0.00   0  0.00  16 27 56
 22.07 608 13.10  22.09 607 13.09   0.00   0  0.00  14 29 57
 26.54 570 14.77  26.37 575 14.80   0.00   0  0.00  12 34 54
 18.91 646 11.93  18.90 646 11.93   0.00   0  0.00   9 33 58
 15.12 636  9.38  15.12 636  9.38   0.00   0  0.00  14 22 64
 16.22 612  9.69  16.23 611  9.68   0.00   0  0.00  20 27 54
 15.02 573  8.41  15.01 574  8.41   0.00   0  0.00  14 29 57
 15.54 593  9.00  15.52 595  9.02   0.00   0  0.00  13 28 59
 22.35 596 13.01  22.42 593 12.99   0.00   0  0.00   9 32 58
 61.57 887 53.33  60.73 901 53.43   4.00   1  0.00   8 48 44
 11.13 2173 23.62  11.13 2167 23.54   0.00   0  0.00  10 68 22
 10.07 2402 23.63  10.20 2368 23.58   4.00   1  0.00  10 72 18
 14.75 1110 15.99  14.74 1116 16.06   8.92   6  0.05  12 42 46
 22.79 510 11.36  22.79 510 11.36   0.00   0  0.00  16 28 56
 23.65 519 11.99  23.50 522 11.98   0.00   0  0.00  13 42 46
 22.45 592 12.98  22.45 592 12.98   0.00   0  0.00  14 27 58
 25.38 579 14.35  25.37 579 14.35   0.00   0  0.00   8 36 56


PART 2

Trying to understand if there is a faster way to do this.  This is part
of our nightly bulk load of a data warehouse.  We are reading in new
data, pulling out the relevant bits, and then need to check whether they
already exist in the dimension tables.  We used to do this via separate
lookups for each value, which was not very fast.  We are trying to do
this all in the DB now.

The query is

SELECT t1.id, t2.md5, t2.url FROM referral_temp t2 LEFT OUTER JOIN
d_referral t1 ON t2.md5 = t1.referral_md5;


\d d_referral
 id                 | integer | not null
 referral_md5       | text    | not null
 referral_raw_url   | text    | not null
 referral_host      | text    |
 referral_path      | text    |
 referral_query     | text    |
 job_control_number | integer | not null


\d referral_temp
 md5    | text |
 url    | text |
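Since the point of the join is to find which temp rows are missing from the dimension, the non-matches can also be pulled out directly with the usual anti-join filter (a sketch using the tables above):

```sql
-- Rows in referral_temp with no match in d_referral, i.e. the new
-- dimension entries that still need to be inserted.  Unmatched rows
-- have a NULL t1.id because of the left outer join.
SELECT t2.md5, t2.url
FROM referral_temp t2
LEFT OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5
WHERE t1.id IS NULL;
```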

Actual row count in the temp table:

select count(*) from referral_temp ;
  502347

Actual row count in d_referral table:

select count(*) from d_referral ;
  27908024


Note that an ANALYZE had not been performed on the referral_temp table
prior to the explain analyze run.

explain analyze  SELECT t1.id, t2.md5, t2.url from referral_temp t2 LEFT
OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5

Nested Loop Left Join  (cost=0.00..3046.00 rows=1001 width=68) (actual time=136.513..6440616.541 rows=502347 loops=1)
   ->  Seq Scan on referral_temp t2  (cost=0.00..20.00 rows=1000 width=64) (actual time=21.730..10552.421 rows=502347 loops=1)
   ->  Index Scan using d_referral_referral_md5_key on d_referral t1  (cost=0.00..3.01 rows=1 width=40) (actual time=12.768..14.022 rows=1 loops=502347)
         Index Cond: ("outer".md5 = t1.referral_md5)


 Total runtime: 6441969.698 ms
(5 rows)

Thanks.

--sean


Here is an explain analyze after the analyze was done.  Unfortunately I
think a lot of the data was still in cache when I did this again :-(

explain analyze  SELECT t1.id, t2.md5, t2.url from referral_temp t2 LEFT
OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5;

Nested Loop Left Join  (cost=0.00..1468759.69 rows=480082 width=149) (actual time=69.576..3226854.850 rows=502347 loops=1)
   ->  Seq Scan on referral_temp t2  (cost=0.00..16034.81 rows=480081 width=145) (actual time=11.206..4003.521 rows=502347 loops=1)
   ->  Index Scan using d_referral_referral_md5_key on d_referral t1  (cost=0.00..3.01 rows=1 width=40) (actual time=6.396..6.402 rows=1 loops=502347)
         Index Cond: ("outer".md5 = t1.referral_md5)
 Total runtime: 3227830.752 ms


Re: General performance questions about postgres on Apple

From: "scott.marlowe"
On Fri, 20 Feb 2004, Sean Shanny wrote:

> max_connections = 100
>
> # - Memory -
>
> shared_buffers = 16000          # min 16, at least max_connections*2, 8KB each
> sort_mem = 256000               # min 64, size in KB

You might wanna drop sort_mem somewhat and just set it during your imports
to something big like 512000 or larger.  That way with 100 users during
the day you won't have to worry about swap storms, and when you run your
updates, you get all that sort_mem.
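In session form, that advice would look something like this (the values are illustrative, following the numbers suggested above):

```sql
-- Keep the global default modest in postgresql.conf, e.g.
--   sort_mem = 16384            # 16MB per sort, per backend
-- then raise it only inside the bulk-load session:
SET sort_mem = 512000;          -- ~500MB, this session only
-- ... run the nightly load queries here ...
RESET sort_mem;                 -- back to the postgresql.conf default
```

Since sort_mem is per-sort rather than per-backend, a large global value multiplied across 100 connections is what risks the swap storms mentioned above.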

> Actual row count in the temp table:
>
> select count(*) from referral_temp ;
>   502347
>
> Actual row count in d_referral table:
>
> select count(*) from d_referral ;
>   27908024
>
>
> Note: that an analyze had not been performed on the referral_temp table
> prior to the explain analyze run.
>
> explain analyze  SELECT t1.id, t2.md5, t2.url from referral_temp t2 LEFT
> OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5
>
> Nested Loop Left Join  (cost=0.00..3046.00 rows=1001 width=68) (actual time=136.513..6440616.541 rows=502347 loops=1)
>    ->  Seq Scan on referral_temp t2  (cost=0.00..20.00 rows=1000 width=64) (actual time=21.730..10552.421 rows=502347 loops=1)
>    ->  Index Scan using d_referral_referral_md5_key on d_referral t1  (cost=0.00..3.01 rows=1 width=40) (actual time=12.768..14.022 rows=1 loops=502347)
>          Index Cond: ("outer".md5 = t1.referral_md5)
>
>
>  Total runtime: 6441969.698 ms
> (5 rows)
>
> Thanks.
>
> --sean
>
>
> Here is an explain analyze after the analyze was done.  Unfortunately I
> think a lot of the data was still in cache when I did this again :-(
>
> explain analyze  SELECT t1.id, t2.md5, t2.url from referral_temp t2 LEFT
> OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5;
>
> Nested Loop Left Join  (cost=0.00..1468759.69 rows=480082 width=149) (actual time=69.576..3226854.850 rows=502347 loops=1)
>    ->  Seq Scan on referral_temp t2  (cost=0.00..16034.81 rows=480081 width=145) (actual time=11.206..4003.521 rows=502347 loops=1)
>    ->  Index Scan using d_referral_referral_md5_key on d_referral t1  (cost=0.00..3.01 rows=1 width=40) (actual time=6.396..6.402 rows=1 loops=502347)
>          Index Cond: ("outer".md5 = t1.referral_md5)
>  Total runtime: 3227830.752 ms

Hmmm.  It looks like postgresql is still picking a nested loop when
something faster, like a merge or hash join, should win.  Try doing a
"set enable_nestloop = off" and see what you get.

If that makes it faster, you may want to adjust the costs of the cpu_*
stuff higher to see if that can force it to do the right thing.
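A sketch of that experiment in a psql session (the cpu_* values below are illustrative nudges above the 7.4 defaults, not recommendations):

```sql
-- Session-level experiment: forbid nested loops and re-check the plan.
SET enable_nestloop = off;
EXPLAIN ANALYZE
SELECT t1.id, t2.md5, t2.url
FROM referral_temp t2
LEFT OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5;

-- If the hash/merge plan wins, raising the per-tuple CPU costs can
-- make the planner pick it on its own by penalizing the nested loop's
-- 500K repeated index probes:
SET enable_nestloop = on;
SET cpu_tuple_cost = 0.02;          -- default in 7.4 is 0.01
SET cpu_index_tuple_cost = 0.002;   -- default in 7.4 is 0.001
```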

Looking at the amount of time taken by the nested loop, it looks like the
problem to me.

And why are you doing a left join of ONE row from one table against the
whole temp table?  Do you really need to do that?  Since there's only
one row in the source table, and I'd guess it only matches one or a few
rows from the temp table, this means you're gonna have that one row and
a bunch of null-filled rows to go with it.


Re: General performance questions about postgres on Apple hardware...

From: "Simon Riggs"
> Sean Shanny wrote:
> Hardware:  Apple G5 dual 2.0 with 8GB memory attached via dual fibre
> channel to a fully loaded 3.5TB XRaid.  The XRaid is configured as two 7
> disk hardware based RAID5 sets software striped to form a RAID50 set.
> The DB, WALS, etc are all on that file set.  Running OSX journaled file
> system  Running postgres 7.4.1.  OSX Server 10.3.2  Postgres is compiled
> locally with  '--enable-recode' '--enable-multibyte=UNICODE'
> 'CFLAGS=-mcpu=970 -mtune=970 -mpowerpc64 -O3'

Have you tried altering the blocksize to a higher value? Say 32K?

> max_connections = 100

Why have you set this to 100 when you have typically 1-3 users?

> sort_mem = 256000               # min 64, size in KB

If you have only 1-3 users, then that value seems reasonable.

> The query is
>
> SELECT t1.id, t2.md5, t2.url FROM referral_temp t2 LEFT OUTER JOIN
> d_referral t1 ON t2.md5 = t1.referral_md5;
>
>
> \d d_referral
>  id                 | integer | not null
>  referral_md5       | text    | not null
>  referral_raw_url   | text    | not null
>  referral_host      | text    |
>  referral_path      | text    |
>  referral_query     | text    |
>  job_control_number | integer | not null
>
>
> \d referral_temp
>  md5    | text |
>  url    | text |

Have you looked at using reversed indexes, as per recent postings in
[performance]? These seemed to help considerably with lookup speed when
using a large URL database, which seems to be your situation here.

...
> Jeff Boes writes:
> We have a large (several million row) table with a field containing
> URLs. Now, funny thing about URLs: they mostly start with a common
> substring ("http://www."). But not all the rows start with this, so we
> can't just lop off the first N characters. However, we noticed some time
> ago that an index on this field wasn't as effective as an index on the
> REVERSE of the field. So ...
>
> CREATE OR REPLACE FUNCTION fn_urlrev(text) returns text as '
> return reverse(lc($_[0]))
> ' language 'plperl' with (iscachable,isstrict);
>
> and then
>
> CREATE UNIQUE INDEX ix_links_3 ON links
> (fn_urlrev(path_base));

You have 2 CPUs: have you tried splitting your input data file into two
tables, then executing the same query simultaneously, to split the
processing? If you get the correct plan, you should use roughly the same
I/O but use all of the available CPU power.
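A sketch of that split (the table names here are made up; the WHERE clause assumes the md5 values are lowercase hex, so '8' cuts the keyspace roughly in half):

```sql
-- Split the input into two halves; run the same join against
-- referral_temp_a in one psql session and referral_temp_b in another,
-- then combine the results.
CREATE TABLE referral_temp_a AS
  SELECT * FROM referral_temp WHERE md5 < '8';
CREATE TABLE referral_temp_b AS
  SELECT * FROM referral_temp WHERE md5 >= '8';
```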

I'm sure we'd all be interested in your further results!

Best Regards, Simon Riggs


Re: General performance questions about postgres on Apple

From: Sean Shanny
Simon Riggs wrote:

>> Sean Shanny
>> Hardware:  Apple G5 dual 2.0 with 8GB memory attached via dual fibre
>> channel to a fully loaded 3.5TB XRaid.  The XRaid is configured as two 7
>> disk hardware based RAID5 sets software striped to form a RAID50 set.
>> The DB, WALS, etc are all on that file set.  Running OSX journaled file
>> system  Running postgres 7.4.1.  OSX Server 10.3.2  Postgres is compiled
>> locally with  '--enable-recode' '--enable-multibyte=UNICODE'
>> 'CFLAGS=-mcpu=970 -mtune=970 -mpowerpc64 -O3'
>
> Have you tried altering the blocksize to a higher value? Say 32K?

That is on our to-do list.  We had made that change while running on BSD
5.1 on a Dell 2650 with 4GB and 5 10K SCSI drives in RAID 0.  We did not
see a huge improvement.

>> max_connections = 100
>
> Why have you set this to 100 when you have typically 1-3 users?

Have already addressed that by lowering this to 50.  Will drop it lower
as time goes on.

>> sort_mem = 256000               # min 64, size in KB
>
> If you have only 1-3 users, then that value seems reasonable.

>> The query is
>>
>> SELECT t1.id, t2.md5, t2.url FROM referral_temp t2 LEFT OUTER JOIN
>> d_referral t1 ON t2.md5 = t1.referral_md5;
>>
>> \d d_referral
>>  id                 | integer | not null
>>  referral_md5       | text    | not null
>>  referral_raw_url   | text    | not null
>>  referral_host      | text    |
>>  referral_path      | text    |
>>  referral_query     | text    |
>>  job_control_number | integer | not null
>>
>> \d referral_temp
>>  md5    | text |
>>  url    | text |
>
> Have you looked at using reversed indexes, as per recent postings in
> [performance]? These seemed to help considerably with lookup speed when
> using a large URL database, which seems to be your situation here.

We create an MD5 of the URL and store it as referral_md5; this is our
key for lookups.  We ran into problems using the URL itself as the
index: the postgres indexing code was complaining about the URL being
too long.  Hence the MD5, which, though it takes longer to compute
during the ETL phase, is much quicker to match on.
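For reference, that hashing can also be done inside the database, since md5() is built in as of 7.4 (the raw_urls table below is hypothetical; whether in-DB hashing beats hashing during ETL is workload-dependent):

```sql
-- md5() returns a 32-character hex text value, keeping index entries
-- short and fixed-length no matter how long the original URL is.
SELECT md5('http://www.example.com/some/very/long/url?a=1&b=2');

-- e.g. populating the temp table from a hypothetical staging table:
-- INSERT INTO referral_temp SELECT md5(url), url FROM raw_urls;
```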

> ...
>
>> Jeff Boes writes:
>> We have a large (several million row) table with a field containing
>> URLs. Now, funny thing about URLs: they mostly start with a common
>> substring ("http://www."). But not all the rows start with this, so we
>> can't just lop off the first N characters. However, we noticed some time
>> ago that an index on this field wasn't as effective as an index on the
>> REVERSE of the field. So ...
>>
>> CREATE OR REPLACE FUNCTION fn_urlrev(text) returns text as '
>> return reverse(lc($_[0]))
>> ' language 'plperl' with (iscachable,isstrict);
>>
>> and then
>>
>> CREATE UNIQUE INDEX ix_links_3 ON links
>> (fn_urlrev(path_base));
>
> You have 2 CPUs: have you tried splitting your input data file into two
> tables, then executing the same query simultaneously, to split the
> processing? If you get the correct plan, you should use roughly the same
> I/O but use all of the available CPU power.

Have not considered that.

>I'm sure we'd all be interested in your further results!
I will post things as I discover them.

--sean

>Best Regards, Simon Riggs