Thread: Re-Create Table make Faster.

Re-Create Table make Faster.

From
"Yudha Setiawan"
Date:
Dear list,
 
One day I ran into a problem with my table.
It has 35,000 records and 33 fields.
I tried to update about 30% of the records,
and I have an index that matches the
condition of my UPDATE. Before updating
the table I ran VACUUM and REINDEX on it,
but the update still took a long time,
about 30 minutes. I tried VACUUM and
REINDEX many times, but it still took a
very long time. Finally, in the middle of
my confusion and desperation, I tried
dropping and recreating the table, and now
my UPDATE runs very fast. I am still
confused about how this can happen. Could
somebody give me an explanation, please?
 
Here is the structure of my table:
 
    Column     |            Type             |   Modifiers
---------------+-----------------------------+----------------
 fc_branch     | character(6)                | not null
 fc_stockcode  | character(20)               | not null
 fv_stockname  | character varying(40)       | not null
 fm_valuestock | numeric(30,5)               | default 0
 fm_hpp        | numeric(30,5)               | default 0
 fn_onhand     | numeric(8,0)                | default 0
 fn_allocated  | numeric(8,0)                | default 0
 fn_valuealloc | numeric(18,0)               | default 0
 fn_tmpalloca  | numeric(8,0)                | default 0
 fv_colorname  | character varying(15)       |
 fv_colorcode  | character varying(15)       |
 fd_lastupdate | timestamp without time zone |
 fd_inputdate  | timestamp without time zone |
 fv_updateby   | character varying(8)        |
 fd_lastsales  | timestamp without time zone |
 fd_lastpurch  | timestamp without time zone |
 fc_divisi     | character(2)                |
 fc_brand      | character(2)                | not null
 fc_group      | character(2)                | not null
 fc_subgrp     | character(2)                | not null
 fc_type       | character(2)                | not null
 fc_pack       | character(2)                | not null
 fn_reorder    | numeric(8,0)                | default 0
 fn_outstpurch | numeric(8,0)                | default 0
 fn_outstsales | numeric(8,0)                | default 0
 fn_uninvoiced | numeric(18,0)               | default 0
 fn_valueuninv | numeric(18,0)               | default 0
 fn_openblnc   | numeric(8,0)                | default 0
 fn_maxpurch   | numeric(8,0)                | default 0
 fn_minpurch   | numeric(8,0)                | default 0
 fn_maxsales   | numeric(8,0)                | default 0
 fn_minsales   | numeric(8,0)                | default 0
 fn_maxstock   | numeric(8,0)                | default 0
 fn_minstock   | numeric(8,0)                | default 0
 fc_gradeinout | character(6)                | default 'SLOW'
 fc_hold       | character(3)                | default 'NO'
 fc_pictureclr | character(3)                | default 'NO'
 fc_report     | character(3)                | default 'NO'
 fn_volume     | numeric(9,5)                | default 0
 fm_lasthpp    | numeric(30,5)               | default 0
 fm_lastprice  | numeric(30,5)               | default 0
 fn_lastdisc1  | numeric(18,0)               | default 0
 fn_lastdisc2  | numeric(18,0)               | default 0
 fn_lastdisc3  | numeric(18,0)               | default 0
 ft_note       | text                        |
Indexes: pk_t_stock1 primary key btree (fc_branch, fc_stockcode),
         i_stock01 btree (fc_branch, fc_stockcode),
         i_stock02 btree (fc_branch, fv_stockname),
         i_stock03 btree (fc_branch, fc_group)
And this is my UPDATE:
 
Update t_stock set fn_onhand = 50 where fc_branch = 'ABCDE' and fc_group = 'G1';
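 
In case it helps with the explanation, the plan for that UPDATE can be checked
roughly like this (only a sketch; EXPLAIN ANALYZE really executes the statement,
so it is wrapped in a transaction and rolled back, and the plan shown will
depend on the data):
 
BEGIN;
EXPLAIN ANALYZE
    UPDATE t_stock SET fn_onhand = 50
    WHERE fc_branch = 'ABCDE' AND fc_group = 'G1';
ROLLBACK;
 
If this shows a sequential scan, or an index scan that reads far more pages
than 30% of the live rows should need, that points at dead tuples and bloated
indexes rather than at the statement itself.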
 
Thanks, and GOD bless you all.
 
 

Automatic error rollback can be harmful

From
"Wayne Armstrong"
Date:
Hi,
 I have been using PostgreSQL for about 3 months now (after about 15 years of
using various commercial databases).
I love it :) but (and you just knew that "but" was coming :) there is one thing
that really causes me grief: the tendency for Postgres to roll back
automatically on some errors.

What this leads to is the need for an application to commit more frequently
than is really desirable.

Example 1.
 During an import of 120 thousand records from an ISAM file system, say 3 records
fail integrity checks (files in non-database systems tend not to have
referential integrity implemented on them except at the application level). The
desired result is to drop the records that fail the integrity checks. Importing
into DB2 or Oracle, say, I have the option to ignore the referential integrity
errors (I get the error code on the insert anyway) and continue with the
insert of the remaining 120 thousand - 3 records. In Postgres, I either have to
commit after every record write, or guarantee the "cleanness" of the data before
I begin the import, which is sometimes difficult given the data sources I may be
importing from. Worse, even if the intention behind the automatic rollback is to
guarantee data purity (from the PostgreSQL manual, section 10.4.1):
" Turn off autocommit and just do one commit at the end. (In plain SQL, this
means issuing BEGIN at the start and COMMIT at the end. Some client libraries
may do this behind your back, in which case you need to make sure the library
does it when you want it done.) If you allow each insertion to be committed
separately, PostgreSQL is doing a lot of work for each record added. An
additional benefit of doing all insertions in one transaction is that if the
insertion of one record were to fail then the insertion of all records inserted
up to that point would be rolled back, so you won't be stuck with partially
loaded data."
it actually makes more of a mess than leaving whether or not to roll back under
application control. If, in this example, the three "bad" records are scattered
throughout the dataset, I could end up (if I ignored the error, as I would for
most other DBMSs) with a random number of records in the table, and so on.
Also, to do robust imports in the insert-then-update-on-error style available
in most other (commercial) DBMSs, it is necessary to issue a commit before
each insert/update attempt.
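 
To spell out the behaviour with a sketch (the table and rows here are purely
hypothetical, just to illustrate what happens to a bulk load once one statement
fails inside a single transaction):
 
BEGIN;
INSERT INTO import_target VALUES (1, 'good row');           -- ok
INSERT INTO import_target VALUES (2, 'bad row');            -- fails an integrity check: ERROR
INSERT INTO import_target VALUES (3, 'another good row');   -- rejected, the transaction is already aborted
COMMIT;                                                      -- behaves as ROLLBACK; nothing is loaded
 
With one transaction around the whole import, the 3 bad records take the other
120 thousand down with them, which is why the choice is between committing per
record or guaranteeing clean data up front.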

Example 2 - ODBC driver rollbacks :-

 Here is a snippet from a postgres log :-
2003-05-24 23:09:14 [1525]   LOG:  duration: 0.134503 sec
2003-05-24 23:09:14 [1525]   LOG:  query: select nspname from pg_namespace n, pg_class c where c.relnamespace=n.oid and c.oid='select'::regclass
2003-05-24 23:09:14 [1525]   ERROR:  Relation "select" does not exist
2003-05-24 23:09:14 [1525]   LOG:  statement: select nspname from pg_namespace n, pg_class c where c.relnamespace=n.oid and c.oid='select'::regclass
2003-05-24 23:09:14 [1525]   LOG:  query: ROLLBACK
2003-05-24 23:09:14 [1525]   LOG:  duration: 0.000538 sec

The rollback here is generated by the ODBC driver to clear an error that the
ODBC driver itself created by incorrectly parsing the "select" of a subselect
statement as a table name. The application is totally unaware that an error has
occurred and a rollback has been issued. This is most likely to lead to data
loss unless, basically, an application using ODBC is in autocommit mode or
commits after every SQL statement issued. This concerns me to the point where I
would really recommend not using the parse-statement option in the ODBC driver
(even though that reduces the ODBC driver's ability to mimic a prepare, which
is also problematic :) unless you are using autocommit.

Thoughts and comments ?

Regards,
Wayne Armstrong
Bacchus Management Systems
http://www.bacchus.com.au


Re: Re-Create Table make Faster.

From
Robert Treat
Date:
Sounds like you need to read up on the vacuum command. Your table
probably had a lot of dead tuples which were eliminated by a
drop/recreate.

See:
http://www.postgresql.org/docs/view.php?version=7.3&idoc=1&file=routine-vacuuming.html
http://www.postgresql.org/docs/view.php?version=7.3&idoc=1&file=sql-vacuum.html
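 
As a rough sketch of the routine maintenance those pages describe (note that
VACUUM FULL rewrites the table and holds an exclusive lock while it runs, so it
belongs in a quiet period):
 
VACUUM FULL ANALYZE t_stock;   -- reclaim space held by dead tuples and refresh planner statistics
REINDEX TABLE t_stock;         -- rebuild the indexes, which plain vacuum will not shrink
 
Running a plain "VACUUM ANALYZE t_stock;" frequently enough that dead tuples
never pile up is usually the better long-term answer.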


Robert Treat

On Sat, 2003-05-24 at 03:44, Yudha Setiawan wrote:
> Dear list,
>
> One day I ran into a problem with my table.
> It has 35,000 records and 33 fields.
> I tried to update about 30% of the records,
> and I have an index that matches the
> condition of my UPDATE. Before updating
> the table I ran VACUUM and REINDEX on it,
> but the update still took a long time,
> about 30 minutes. I tried VACUUM and
> REINDEX many times, but it still took a
> very long time. Finally, in the middle of
> my confusion and desperation, I tried
> dropping and recreating the table, and now
> my UPDATE runs very fast. I am still
> confused about how this can happen. Could
> somebody give me an explanation, please?
>




Re: Re-Create Table make Faster.

From
"Yudha Setiawan"
Date:
Thanks, Robert Treat, for your response.

You wrote:
> Sounds like you need to read up on the vacuum command. Your table
> probably had a lot of dead tuples which were eliminated by a
> drop/recreate.

Was the nightly vacuum and reindex run from crontab not enough,
so that I still ended up with a lot of dead tuples?
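 
Maybe the vacuum output would show it; as far as I understand, something like
this (just a sketch) reports how many dead tuples each run removes and how many
remain:
 
VACUUM VERBOSE t_stock;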

GOD Bless You All.