Re: PostgreSQL and Linux 2.6 kernel. - Mailing list pgsql-performance
| From | Gary Doades |
|---|---|
| Subject | Re: PostgreSQL and Linux 2.6 kernel. |
| Date | |
| Msg-id | E7C9BB6715CC@gpdnet.co.uk |
| In response to | PostgreSQL and Linux 2.6 kernel. ("Gary Doades" <gpd@gpdnet.co.uk>) |
| List | pgsql-performance |
Thanks, I know about SET SHOWPLAN_TEXT, but it is only the equivalent of EXPLAIN, not EXPLAIN ANALYZE. The graphical plan gives full statistics: runtime, percentage cost, loop execution counts etc., which is much more useful. I don't know of a way of getting the graphical plan content in text form.

Cheers,
Gary.
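For reference, a rough sketch of the Query Analyzer text-mode settings under discussion, applied to the same update statement as in the quoted message below (assuming the same table names exist on the SQL Server side). This is from memory rather than checked against SQL Server 2000, so treat the exact options and their output as assumptions:

  -- Estimated plan only; the statement is not executed (the EXPLAIN analogue).
  SET SHOWPLAN_TEXT ON
  GO
  UPDATE staff_booking SET time_from = r.time_from
  FROM order_reqt r WHERE r.reqt_id = staff_booking.reqt_id
  GO
  SET SHOWPLAN_TEXT OFF
  GO

  -- Executes the statement and returns the plan as text with actual row and
  -- execute counts per operator; SET STATISTICS TIME adds parse/compile and
  -- execution times. Whether this covers everything the graphical plan shows
  -- is left open here.
  SET STATISTICS PROFILE ON
  SET STATISTICS TIME ON
  GO
  UPDATE staff_booking SET time_from = r.time_from
  FROM order_reqt r WHERE r.reqt_id = staff_booking.reqt_id
  GO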
On 3 Apr 2004 at 6:50, @g v t c wrote:

Use "Set Show_Plan" or something of the sort in Query Analyzer. Then run
your SQL. This will change the graphical plan to a text plan similar to
PostgreSQL's, or at least something close to readable.

Gary Doades wrote:

>On 2 Apr 2004 at 22:36, pgsql-performance@postgresql. wrote:
>
>OK, some more detail:
>
>Before wiping 2.4 off my test box for the second time:
>
>SQL Statement for update:
>update staff_booking set time_from = r.time_from from order_reqt r
>where r.reqt_id = staff_booking.reqt_id;
>
>Explain (on 2.4):
>QUERY PLAN
>Merge Join  (cost=0.00..185731.30 rows=2845920 width=92)
>  Merge Cond: ("outer".reqt_id = "inner".reqt_id)
>  ->  Index Scan using order_reqt_pkey on order_reqt r  (cost=0.00..53068.20 rows=2206291 width=6)
>  ->  Index Scan using staff_book_idx2 on staff_booking  (cost=0.00..99579.21 rows=2845920 width=90)
>
>Total execution time: 18 hours 12 minutes
>
>vacuum full analyze: total time 3 hours 22 minutes
>
>Wait 2 hours for re-install of 2.6, set params etc., restore database.
>
>Same SQL statement
>Explain (on 2.6):
>QUERY PLAN
>Merge Join  (cost=0.00..209740.24 rows=2845920 width=92)
>  Merge Cond: ("outer".reqt_id = "inner".reqt_id)
>  ->  Index Scan using order_reqt_pkey on order_reqt r  (cost=0.00..50734.20 rows=2206291 width=6)
>  ->  Index Scan using staff_book_idx2 on staff_booking  (cost=0.00..117921.92 rows=2845920 width=90)
>
>Total execution time: 2 hours 53 minutes
>
>vacuum full analyze: total time 1 hour 6 minutes
>
>Table definitions for the two tables involved:
>
>CREATE TABLE ORDER_REQT
>(
>  REQT_ID            SERIAL,
>  ORDER_ID           integer NOT NULL,
>  DAYOFWEEK          smallint NOT NULL CHECK (DAYOFWEEK BETWEEN 0 AND 6),
>  TIME_FROM          smallint NOT NULL CHECK (TIME_FROM BETWEEN 0 AND 1439),
>  DURATION           smallint NOT NULL CHECK (DURATION BETWEEN 0 AND 1439),
>  PRODUCT_ID         integer NOT NULL,
>  NUMBER_REQT        smallint NOT NULL DEFAULT (1),
>  WROPTIONS          integer NOT NULL DEFAULT 0,
>  UID_REF            integer NOT NULL,
>  DT_STAMP           timestamp NOT NULL DEFAULT current_timestamp,
>  Sentinel_Priority  integer NOT NULL DEFAULT 0,
>  PERIOD             smallint NOT NULL DEFAULT 1 CHECK (PERIOD BETWEEN -2 AND 4),
>  FREQUENCY          smallint NOT NULL DEFAULT 1,
>  PRIMARY KEY (REQT_ID)
>);
>
>CREATE TABLE STAFF_BOOKING
>(
>  BOOKING_ID         SERIAL,
>  REQT_ID            integer NOT NULL,
>  ENTITY_TYPE        smallint NOT NULL DEFAULT 3 CHECK (ENTITY_TYPE IN (3,4)),
>  STAFF_ID           integer NOT NULL,
>  CONTRACT_ID        integer NOT NULL,
>  TIME_FROM          smallint NOT NULL CHECK (TIME_FROM BETWEEN 0 AND 1439),
>  DURATION           smallint NOT NULL CHECK (DURATION BETWEEN 0 AND 1439),
>  PERIOD             smallint NOT NULL DEFAULT 1 CHECK (PERIOD BETWEEN -2 AND 4),
>  FREQUENCY          smallint NOT NULL DEFAULT 1,
>  TRAVEL_TO          smallint NOT NULL DEFAULT 0,
>  UID_REF            integer NOT NULL,
>  DT_STAMP           timestamp NOT NULL DEFAULT current_timestamp,
>  SELL_PRICE         numeric(10,4) NOT NULL DEFAULT 0,
>  COST_PRICE         numeric(10,4) NOT NULL DEFAULT 0,
>  MIN_SELL_PRICE     numeric(10,4) NOT NULL DEFAULT 0,
>  MIN_COST_PRICE     numeric(10,4) NOT NULL DEFAULT 0,
>  Sentinel_Priority  integer NOT NULL DEFAULT 0,
>  CHECK_INTERVAL     smallint NOT NULL DEFAULT 0,
>  STATUS             smallint NOT NULL DEFAULT 0,
>  WROPTIONS          integer NOT NULL DEFAULT 0,
>  PRIMARY KEY (BOOKING_ID)
>);
>
>Foreign keys:
>
>ALTER TABLE ORDER_REQT ADD
>  FOREIGN KEY (ORDER_ID) REFERENCES MAIN_ORDER (ORDER_ID) ON DELETE CASCADE;
>
>ALTER TABLE ORDER_REQT ADD
>  FOREIGN KEY (PRODUCT_ID) REFERENCES PRODUCT (PRODUCT_ID);
>
>ALTER TABLE STAFF_BOOKING ADD
>  FOREIGN KEY (CONTRACT_ID) REFERENCES STAFF_CONTRACT (CONTRACT_ID);
>
>ALTER TABLE STAFF_BOOKING ADD
>  FOREIGN KEY (STAFF_ID) REFERENCES STAFF (STAFF_ID);
>
>Indexes:
>
>CREATE INDEX FK_IDX_ORDER_REQT ON ORDER_REQT (ORDER_ID);
>CREATE INDEX FK_IDX_ORDER_REQT_2 ON ORDER_REQT (PRODUCT_ID);
>CREATE INDEX ORDER_REQT_IDX ON ORDER_REQT (ORDER_ID, PRODUCT_ID);
>CREATE INDEX ORDER_REQT_IDX4 ON ORDER_REQT (REQT_ID, TIME_FROM, DURATION);
>CREATE INDEX FK_IDX_STAFF_BOOKING ON STAFF_BOOKING (CONTRACT_ID);
>CREATE INDEX FK_IDX_STAFF_BOOKING_2 ON STAFF_BOOKING (STAFF_ID);
>CREATE INDEX STAFF_BOOK_IDX1 ON STAFF_BOOKING (STAFF_ID, REQT_ID);
>CREATE INDEX STAFF_BOOK_IDX2 ON STAFF_BOOKING (REQT_ID);
>CREATE INDEX STAFF_BOOK_IDX3 ON STAFF_BOOKING (BOOKING_ID, REQT_ID);
>CREATE INDEX STAFF_BOOK_IDX4 ON STAFF_BOOKING (BOOKING_ID, CONTRACT_ID);
>
>There are no indexes on the columns involved in the update; they are
>not required for my usual select statements. This is an attempt to
>slightly denormalise the design to get the performance up to something
>comparable to SQL Server 2000. We hope to move some of our databases
>over to PostgreSQL later in the year and this is part of the ongoing
>testing. SQL Server's query optimiser is a bit smarter than
>PostgreSQL's (yet), so I am hand-optimising some of the more frequently
>used SQL and/or tweaking the database design slightly.
>
>Later, after deciphering SQL Server's graphical plans, I will attempt to
>post comparative performance/access plans, using the same data of
>course, if anyone would be interested....
>
>Cheers,
>Gary.
>
>On 2 Apr 2004 at 1:32, Tom Lane wrote:
>
>>"Gary Doades" <gpd@gpdnet.co.uk> writes:
>>
>>>As a test in PostgreSQL I issued a statement to update a single column
>>>of a table containing 2.8 million rows with the values of a column in
>>>a table with similar rowcount. Using the above spec I had to stop the
>>>server after 17 hours. The poor thing was thrashing the hard disk and
>>>doing more swapping than useful work.
>>
>>This statement is pretty much content-free, since you did not show us
>>the table schemas, the query, or the EXPLAIN output for the query.
>>(I'll forgive you the lack of EXPLAIN ANALYZE, but you could easily
>>have provided all the other hard facts.) There's really no way to tell
>>where the bottleneck is. Maybe it's a kernel-level issue, but I would
>>not bet on that without more evidence. I'd definitely not bet on it
>>without direct confirmation that the same query plan was used in both
>>setups.
>>
>>   regards, tom lane
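A minimal sketch of the kind of confirmation asked for above, using the update statement from earlier in the thread. EXPLAIN ANALYZE actually runs the statement, so the sketch wraps it in a transaction and rolls it back to leave the data untouched:

  BEGIN;

  -- EXPLAIN alone shows only the planner's estimated costs and row counts;
  -- EXPLAIN ANALYZE also executes the plan and reports actual times, row
  -- counts and loop counts for each node, which is what makes the 2.4 and
  -- 2.6 runs directly comparable.
  EXPLAIN ANALYZE
  UPDATE staff_booking
     SET time_from = r.time_from
    FROM order_reqt r
   WHERE r.reqt_id = staff_booking.reqt_id;

  ROLLBACK;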