Thread: Sun vs a P2. Interesting results.
Here's an interesting situation, and I think it may be just that Sun stinks.

I was recently given the go-ahead to switch from Informix to Postgres on one of our properties. (I had dozens of performance comparisons showing how slow Informix was compared to it, and my boss had seen me struggle trying to make it run fast while Postgres, nearly out of the box, was simply spanking it.)

Well, in order to facilitate things we were going to run pg on a 4-CPU (UltraSPARC II 400MHz) Sun with 4GB of memory (also the current Informix box; it isn't very loaded). Now I know FreeBSD/Linux is preferred (and that's where I do a lot of development and testing), but check this out for interesting results.

The Hardware:

Machine A: 4-CPU Sun UltraSPARC II 400MHz, 4GB mem, 20GB RAID5, Solaris 8 (32-bit mode)
Machine B: 2-CPU Pentium II 450MHz, 512MB mem, 18GB RAID0 (2 old SCSI disks), Linux 2.4.18 (stock Red Hat 8.0)

The software: PG 7.3.4, compiled myself.

On the Sun:
Reading specs from /opt/sfw/lib/gcc-lib/sparc-sun-solaris2.9/2.95.3/specs
gcc version 2.95.3 20010315 (release)
(The Solaris 8 box has no compilers - could this be the issue?)

On the P2:
Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/3.2/specs
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --disable-checking --host=i386-redhat-linux --with-system-zlib --enable-__cxa_atexit
Thread model: posix
gcc version 3.2 20020903 (Red Hat Linux 8.0 3.2-7)

Ok, maybe it's the compiler (I'll try installing a newer gcc for the Sun later today).

The PG.conf:

shared_buffers = 2000
sort_mem = 8192
effective_cache_size = 32000
tcpip_sockets = true

The Schema:

userprofile:
 userkey        | character varying(128)   |
 displayname    | character varying(128)   |
 displayname_v  | boolean                   | default 'f'
 name           | character varying(128)   |
 name_v         | boolean                   | default 'f'
 email          | character varying(128)   |
 email_v        | boolean                   | default 'f'
 gender         | character varying(1)     |
 gender_v       | boolean                   | default 'f'
 country        | character varying(64)    |
 country_v      | boolean                   | default 'f'
 zip            | character varying(10)    |
 zip_v          | boolean                   | default 'f'
 city           | character varying(128)   |
 city_v         | boolean                   | default 'f'
 state          | character varying(10)    |
 state_v        | boolean                   | default 'f'
 lang           | character varying(2)     |
 lang_v         | boolean                   | default 'f'
 url            | character varying(255)   |
 url_v          | boolean                   | default 'f'
 phone          | character varying(64)    |
 phone_v        | boolean                   | default 'f'
 phonemobile    | character varying(64)    |
 phonemobile_v  | boolean                   | default 'f'
 phonefax       | character varying(64)    |
 phonefax_v     | boolean                   | default 'f'
 dob            | timestamp with time zone |
 dob_v          | boolean                   | default 'f'
 interests_v    | boolean                   | default 'f'
 description    | character varying(255)   |
 description2   | character varying(255)   |
 description_v  | boolean                   | default 'f'

(Yes, I know it isn't good - a lot of it is because it is the same schema I had to use on Informix. Conveniently, you cannot do much with a text blob on Informix, so I have to use big varchars, but that is a different story.)

The magic query:

select userkey, dob, email, gender, country from imuserprofile
where gender_v and gender='m'
  and country_v and country = 'br'
  and dob_v = 't'
  and dob >= 'now'::timestamptz - '29 years'::interval
  and dob <= 'now'::timestamptz - '18 years'::interval
order by dob asc
limit 20
offset 100

(Page 5 of male Brazilians, 18-29.)

Now the P2 runs this in about 0.3 seconds, and the Sun box runs it in 1 second.
Here's the EXPLAIN ANALYZE output on each:

P2:
 Limit  (cost=2484.52..2484.57 rows=20 width=67) (actual time=377.32..377.41 rows=20 loops=1)
   ->  Sort  (cost=2484.27..2484.74 rows=186 width=67) (actual time=377.02..377.21 rows=121 loops=1)
         Sort Key: dob
         ->  Seq Scan on userprofile  (cost=0.00..2477.28 rows=186 width=67) (actual time=0.15..350.93 rows=1783 loops=1)
               Filter: (gender_v AND (gender = 'm'::character varying) AND country_v AND (country = 'br'::character varying) AND (dob_v = true) AND (dob >= '1974-08-26 07:13:15.903437-04'::timestamp with time zone) AND (dob <= '1985-08-26 07:13:15.903437-04'::timestamp with time zone))
 Total runtime: 378.21 msec
(6 rows)

Sun:
 Limit  (cost=2521.19..2521.24 rows=20 width=67) (actual time=1041.14..1041.20 rows=20 loops=1)
   ->  Sort  (cost=2520.94..2521.39 rows=178 width=67) (actual time=1040.96..1041.08 rows=121 loops=1)
         Sort Key: dob
         ->  Seq Scan on userprofile  (cost=0.00..2514.28 rows=178 width=67) (actual time=0.37..1014.50 rows=1783 loops=1)
               Filter: (gender_v AND (gender = 'm'::character varying) AND country_v AND (country = 'br'::character varying) AND (dob_v = true) AND (dob >= '1974-08-26 08:21:52.158181-04'::timestamp with time zone) AND (dob <= '1985-08-26 08:21:52.158181-04'::timestamp with time zone))
 Total runtime: 1042.54 msec
(6 rows)

They are loaded with the exact same dataset - 53k rows, ~10MB. Notice the estimates are roughly the same, but the execution time is different.

I don't think it is the I/O system, since 10MB will be cached by the OS, and iostat reports no activity on the disks (when running the query many times over, and in parallel). It is a simple query.

Could it just be that the Sun sucks? (And for the record: the same schema and nearly the same query (modified for datetime syntax) on Informix runs in 3 seconds.)

--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/
I spoke with my Sun admin, and this is what he had to say about what you are seeing: Sun gear is known to show lower-than-Intel performance on light loads. Rerun your test with 100 concurrent users (queries) and see what happens. He also recommends installing a 64-bit version of Solaris; the 32-bit version robs a lot of performance as well.

On Tuesday 26 August 2003 05:34, Jeff wrote:
> Here's an interesting situation, and I think it may be just that Sun
> stinks.
--
Darcy Buskermolen
Wavefire Technologies Corp.
ph: 250.717.0200
fx: 250.763.1759
http://www.wavefire.com
Also, after having taken another look at this: you aren't performing the same query on both datasets, so you can't expect them to generate the same results, the same query plans, or even comparable times. Please retry your tests with identical queries; specify the dates rather than using a function like now() to retrieve them.

On Tuesday 26 August 2003 05:34, Jeff wrote:
> Here's an interesting situation, and I think it may be just that Sun
> stinks.
--
Darcy Buskermolen
Wavefire Technologies Corp.
ph: 250.717.0200
fx: 250.763.1759
http://www.wavefire.com
On Tue, 26 Aug 2003, Darcy Buskermolen wrote:

> Also, after having taken another look at this: you aren't performing the same
> query on both datasets, so you can't expect them to generate the same
> results, the same query plans, or even comparable times. Please retry your
> tests with identical queries; specify the dates rather than using a function
> like now() to retrieve them.

Given what you said in the previous email and this one, here's some new information. I redid the query to use a static starting time, and I ran 19 beaters in parallel. After I send this mail out I'll try it with 40.

New Query:

select userkey, dob, email, gender, country from userprofile
where gender_v and gender='m'
  and country_v and country = 'br'
  and dob_v
  and dob >= '2003-08-26'::timestamptz - '29 years'::interval
  and dob <= '2003-08-26'::timestamptz - '18 years'::interval
order by dob asc
limit 20
offset 100

EXPLAIN ANALYZE output (basically the same):

Sun:
 Limit  (cost=2390.05..2390.10 rows=20 width=67) (actual time=1098.34..1098.39 rows=20 loops=1)
   ->  Sort  (cost=2389.80..2390.24 rows=178 width=67) (actual time=1098.16..1098.28 rows=121 loops=1)
         Sort Key: dob
         ->  Seq Scan on imuserprofile  (cost=0.00..2383.14 rows=178 width=67) (actual time=0.38..1068.94 rows=1783 loops=1)
               Filter: (gender_v AND (gender = 'm'::character varying) AND country_v AND (country = 'br'::character varying) AND dob_v AND (dob >= '1974-08-26 00:00:00-04'::timestamp with time zone) AND (dob <= '1985-08-26 00:00:00-04'::timestamp with time zone))
 Total runtime: 1099.93 msec
(6 rows)

P2:
 Limit  (cost=2353.38..2353.43 rows=20 width=67) (actual time=371.75..371.83 rows=20 loops=1)
   ->  Sort  (cost=2353.13..2353.60 rows=186 width=67) (actual time=371.46..371.63 rows=121 loops=1)
         Sort Key: dob
         ->  Seq Scan on imuserprofile  (cost=0.00..2346.14 rows=186 width=67) (actual time=0.17..345.53 rows=1783 loops=1)
               Filter: (gender_v AND (gender = 'm'::character varying) AND country_v AND (country = 'br'::character varying) AND dob_v AND (dob >= '1974-08-26 00:00:00-04'::timestamp with time zone) AND (dob <= '1985-08-26 00:00:00-04'::timestamp with time zone))
 Total runtime: 372.63 msec
(6 rows)

I ran this query 100 times per beater (no prepared queries) and ran 19 beaters in parallel.

P2 machine: 345 sec avg
Sun:        565 sec avg

I know Solaris/Sun isn't the preferred pg platform, and we have plenty of capacity even with these numbers; I just find the speed difference a little surprising.

--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/
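[The thread never shows the actual test harness. A minimal sketch of a "beater" along the lines Jeff describes might look like the shell script below; the database name (imtest) and the use of plain psql are assumptions, while the query and the 19-client / 100-iteration counts come from the message above.]

#!/bin/sh
# Sketch of a "beater" harness: 19 clients, each running the fixed-date
# query 100 times through psql.

DB=imtest
QUERY="select userkey, dob, email, gender, country from userprofile
  where gender_v and gender = 'm'
    and country_v and country = 'br'
    and dob_v
    and dob >= '2003-08-26'::timestamptz - '29 years'::interval
    and dob <= '2003-08-26'::timestamptz - '18 years'::interval
  order by dob asc limit 20 offset 100;"

beater() {
    i=0
    while [ "$i" -lt 100 ]; do
        psql -q -d "$DB" -c "$QUERY" >/dev/null
        i=`expr $i + 1`
    done
}

n=0
while [ "$n" -lt 19 ]; do
    beater &            # each beater runs in its own background shell
    n=`expr $n + 1`
done
wait                    # block until all 19 beaters have finished

[Running this under time(1) on each box, e.g. "time sh beaters.sh", gives the kind of wall-clock totals being compared above.]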
I'm still seeing differences in the planner estimates; have you run a VACUUM ANALYZE prior to running these tests?

Also, are the disk subsystems in these two systems the same? You may be seeing some discrepancies from things like spindle speed, U160 vs. U320, throughput of specific RAID controllers, different block size, etc.

On Tuesday 26 August 2003 11:41, Jeff wrote:
> Given what you said in the previous email and this one, here's some new
> information. I redid the query to use a static starting time, and I ran
> 19 beaters in parallel.

--
Darcy Buskermolen
Wavefire Technologies Corp.
ph: 250.717.0200
fx: 250.763.1759
http://www.wavefire.com
On Tue, 26 Aug 2003, Darcy Buskermolen wrote:

> I'm still seeing differences in the planner estimates; have you run a VACUUM
> ANALYZE prior to running these tests?

I did. I shall retry that, but the numbers (the cost estimates) are pretty close on both; the actual times are very different.

> Also, are the disk subsystems in these two systems the same? You may be seeing
> some discrepancies from things like spindle speed, U160 vs. U320, throughput
> of specific RAID controllers, different block size, etc.

As I said in my first email, I/O isn't the problem here - the data set is small enough that it is all cached (~10MB). iostat reports 0 activity on the disks on both the Sun and the P2. And I just ran the test again with 40 clients: 730s for the P2, 1100s for the Sun (0% idle on both of them, no I/O).

I think the next thing I may try is recompiling with a newer gcc.

--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/
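[Aside: the zero-disk-activity / zero-idle check Jeff describes can be reproduced while the beaters are running with commands like the following; the flags are the usual ones on each OS, and the 5-second interval is arbitrary.]

# Solaris 8: extended per-device statistics every 5 seconds
iostat -xn 5

# Linux (sysstat package): same idea
iostat -x 5

# CPU idle time and run queue on either box
vmstat 5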
On Tue, Aug 26, 2003 at 03:05:12PM -0400, Jeff wrote:

> I did. I shall retry that, but the numbers (the cost estimates) are
> pretty close on both; the actual times are very different.

I don't see why you need to bother; the query plans and cost estimates are similar enough that I doubt that's the problem.

> As I said in my first email, I/O isn't the problem here - the data set is
> small enough that it is all cached (~10MB). iostat reports 0 activity on
> the disks on both the Sun and the P2.

Would it be possible to get a profile (e.g. gprof output) for a postgres backend executing the query on the Sun machine?

-Neil
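[How such a per-backend profile gets produced isn't spelled out in the thread. A rough sketch of one way to do it, using the PostgreSQL PROFILE make variable (the same mechanism Tom Lane mentions at the end of the thread), with assumed source and install paths, would be:]

# 1. In the already-configured source tree, rebuild with profiling enabled
#    (stop the postmaster before installing over the old binaries).
cd /usr/local/src/postgresql-7.3.4
gmake PROFILE=-pg clean all install

# 2. Restart the postmaster, run the query under test many times from a
#    single connection, then disconnect so that backend exits and writes
#    its gmon.out profile data.

# 3. Locate the gmon.out the backend dropped (exactly where it lands under
#    $PGDATA can vary) and feed it, with the profiled binary, to gprof.
#    gprof sums the data if several gmon.out files turn up.
find $PGDATA -name gmon.out
gprof /usr/local/pgsql/bin/postgres `find $PGDATA -name gmon.out` > profile.txt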
On 26 Aug 2003 at 8:34, Jeff wrote:

> Could it just be that the Sun sucks? (And for the record: the same schema
> and nearly the same query (modified for datetime syntax) on Informix runs
> in 3 seconds.)

My impression is that IPC on Sun has a higher initial latency than on Linux. But given that you also ran the tests with multiple connections and the results followed the same pattern, I can't help but conclude that PostgreSQL needs to find a way to get faster IPC on Solaris.

I know I will be flamed for repeatedly making this suggestion, but what do tests with SPARC Linux on the same machine yield?

Bye
Shridhar

--
untold wealth, n.:  What you left out on April 15th.
Well, installing gcc 3.3.1 and using -mcpu=v9 didn't help; in fact it made things worse. Unless someone has something clever, I'm just gonna stop tinkering with it - my goal was met (it is several orders of magnitude faster than Informix) and the hardware is being replaced in a month or two.

Thanks for the ideas / comments.

--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/
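[For anyone curious, a rebuild along those lines would look roughly like the following; only the -mcpu=v9 flag comes from Jeff's message, while the gcc path, install prefix, and -O2 are assumptions.]

cd /usr/local/src/postgresql-7.3.4
gmake distclean
# point configure at the new gcc 3.3.1 and request UltraSPARC code generation
env CC=/usr/local/bin/gcc CFLAGS='-O2 -mcpu=v9' ./configure --prefix=/usr/local/pgsql
gmake all install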
On Tue, 26 Aug 2003, Neil Conway wrote:

> Would it be possible to get a profile (e.g. gprof output) for a postgres
> backend executing the query on the Sun machine?

Heh. Never thought of doing a profile! I attached the entire gprof output, but here are the top few functions. I did the test with 1 beater, 100 searches: 148 seconds total.

  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
 30.9      45.55    45.55                             nocachegetattr [16]
 16.0      69.20    23.65                             internal_mcount [22]
  6.9      79.37    10.17  5245902     0.00     0.00  heapgettup [21]
  6.0      88.28     8.91  3663201     0.00     0.00  ExecMakeFunctionResult <cycle 5> [23]
  5.4      96.27     7.99 11431400     0.00     0.00  ExecEvalVar [25]
  3.0     100.73     4.46 18758201     0.00     0.00  ExecEvalExpr <cycle 5> [24]
  3.0     105.17     4.44  5246005     0.00     0.00  AllocSetReset [29]
  2.5     108.89     3.72  5245700     0.00     0.00  HeapTupleSatisfiesSnapshot [30]
  2.0     111.78     2.89  5650632     0.00     0.00  LWLockRelease [32]
  1.6     114.10     2.32  5650632     0.00     0.00  LWLockAcquire [34]
  1.6     116.40     2.30  5245800     0.00     0.01  SeqNext [17]
  1.4     118.54     2.14  5438301     0.00     0.00  ExecStoreTuple [27]
  1.4     120.62     2.08  5245700     0.00     0.01  ExecQual [18]
  1.3     122.50     1.88  5379202     0.00     0.00  ReleaseAndReadBuffer [35]
  1.1     124.16     1.66   178400     0.01     0.40  ExecScan [15]
  1.1     125.80     1.64                             _mcount (6247)
  1.1     127.41     1.61  5245902     0.00     0.01  heap_getnext [20]

As it turns out, the profile gzipped is still huge (100kB), so I put it on my web server - snag it at
http://www.jefftrout.com/~threshar/postgres/postgres-7.3.4-sol8-gprof.txt.gz

I'll do a profile for the P2 and post that in an hour or two.

--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/
Jeff <threshar@torgo.978.org> writes:

> I'll do a profile for the P2 and post that in an hour or two.

Please redo the Linux profile after recompiling postmaster.c with -DLINUX_PROFILE added (I use "make PROFILE='-pg -DLINUX_PROFILE'" when building for profiling on Linux).

regards, tom lane
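[Spelled out, Tom's suggestion amounts to something like the following in an already-configured Linux source tree (the tree location is an assumption). As I understand the flag, LINUX_PROFILE makes the backend re-arm the profiling timer after fork(), so each child backend produces usable gmon.out data on Linux.]

cd /usr/local/src/postgresql-7.3.4
gmake clean
gmake PROFILE='-pg -DLINUX_PROFILE' all install
# restart, rerun the query workload from one connection, disconnect, then
# point gprof at the new binary and the backend's gmon.out as before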