Thread: Clustering using dblink
Dear list,
I think this is the hard part for me.
I've been porting from MS-SQL to PostgreSQL, and
everything went successfully, but the speed has now become
our concern. We have tried adding SCSI hard disks,
each with its own controller, and tried splitting
views, tables and schemas to help the I/O process.
That helped for a while, but it is getting
slow again now.
The last thing I tried was using dblink to split
my schema and tables onto another server. This is part
of my view script:
create view v_stock as
select * from
dblink('hostaddr=192.168.0.220
port=5432
dbname=d_inventory
user=inventory
password=',
'select * from t_stockreal')
as t1(
fc_branch character(6),
fc_stockcode character(20),
fn_lastdisc3 numeric(18,0),
ft_note text );
But it's even slower now. Please, somebody help me.

Thanks, and God bless you all.
I do not think that using dblink can improve performance. Using dblink
will keep two servers busy instead of one, not to mention the data that
has to be sent over the network.

You need to give more information about the table (number of rows,
indexes, query plan), and then someone might be able to help you.

In some (most) cases VACUUM ANALYZE improves performance.
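For example, something along these lines shows the row count, the
indexes, and a query plan, and refreshes the planner statistics first
(the WHERE value below is just a made-up example; use a value from your
own data):

    -- rough size and index list for the table (\d is a psql command)
    SELECT count(*) FROM t_stockreal;
    \d t_stockreal

    -- refresh planner statistics, then look at the plan of a typical query
    VACUUM ANALYZE t_stockreal;
    EXPLAIN ANALYZE
    SELECT * FROM t_stockreal WHERE fc_stockcode = 'AA0001';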
On Tue, May 27, 2003 at 07:36:43PM +0700, Yudha Setiawan wrote:
> Dear list,
> I think this is the hard part for me.
> I've been porting from MS-SQL to PostgreSQL, and
> everything went successfully, but the speed has now become
> our concern. We have tried adding SCSI hard disks,
> each with its own controller, and tried splitting
> views, tables and schemas to help the I/O process.
> That helped for a while, but it is getting
> slow again now.
> The last thing I tried was using dblink to split
> my schema and tables onto another server. This is part
> of my view script:

Umm, I don't know where you got the idea that putting it on another
server would improve speed.

Anyway, have you done all the usual things for performance (tune
buffers and sort_mem)? Did you create the right indexes, and did you
run VACUUM ANALYZE?

Please post an example query, otherwise we cannot help you. The
EXPLAIN ANALYZE output would be most useful.

--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> "the West won the world not by the superiority of its ideas or values or
> religion but rather by its superiority in applying organized violence.
> Westerners often forget this fact, non-Westerners never do."
>  - Samuel P. Huntington
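For reference, "tune buffers and sort_mem" above means settings like
these in postgresql.conf. The numbers are only illustrative starting
points, assuming a machine with around 1GB of RAM; adjust them to your
hardware:

    # postgresql.conf -- illustrative values only, adjust to your hardware
    shared_buffers = 10000          # 8kB pages, roughly 80MB of shared cache
    sort_mem = 8192                 # kB per sort; raise for big sorts/joins
    effective_cache_size = 65536    # 8kB pages the OS can cache, ~512MB
    # shared_buffers needs a postmaster restart to take effect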
You should give more information about your problem.

On Tuesday, 27 May 2003 19:36, Yudha Setiawan wrote:
> Dear list,
> I think this is the hard part for me.
> I've been porting from MS-SQL to PostgreSQL, and
> everything went successfully, but the speed has now become
> our concern. We have tried adding SCSI hard disks,
> each with its own controller, and tried splitting
> views, tables and schemas to help the I/O process.
> That helped for a while, but it is getting
> slow again now.
> The last thing I tried was using dblink to split
> my schema and tables onto another server. This is part
> of my view script:
>
> create view v_stock as
> select * from
> dblink('hostaddr=192.168.0.220
> port=5432
> dbname=d_inventory
> user=inventory
> password=',
> 'select * from t_stockreal')
> as t1(
> fc_branch character(6),
> fc_stockcode character(20),
> fn_lastdisc3 numeric(18,0),
> ft_note text );
>
> But it's even slower now. Please, somebody help me.
>
> Thanks, and God bless you all.
On Wed, May 28, 2003 at 01:19:18PM +0700, Yudha Setiawan wrote:
> Thanks, dear Martijn van Oosterhout, for your interest
> in helping me.

Hi. In the future when you post to a mailing list, please respond to
the mailing list.

> >> Anyway, have you done all the usual things for performance (tune
> >> buffers and sort_mem)?
> * Here is my conf. (Please open my attachment.)

No problem there.

> >> Umm, I don't know where you got the idea that putting it
> >> on another server would improve speed.
> * I thought that splitting the tables or schemas onto separate hard
> * disks, each with its own controller, would increase the speed; we
> * also put tables and schemas that we join against on different disks.
> * We did that, and it did increase the speed, but only for about a
> * month. My data is growing so fast that it got slow again, so finally
> * I tried to split the data onto another server using dblink. I put
> * the original data, triggers, functions and sequences on Server-2
> * (the back server) and created the views and their rules on Server-1
> * (the front server). This is also good for my front end, because my
> * application only connects to the IP of Server-1. I've been using a
> * gigabit LAN card between Server-1 and Server-2 to pay for the view
> * (I know a view like this will take a lot of resources).

I don't know exactly how much output to expect. According to your
output below, you don't get any output at all?

> 3. Here is the way we call / execute my function:
>
> select * from d_master.pr_onhand_warehouse_standar('2003/02/01','CSATGR','AA','AA','AG','','','');
>
> explain analyze
> explain analyze select * from d_master.pr_onhand_gudang_standar('2003/02/01','CSATGR','AA','AA','AG','','','')
>
>                      QUERY PLAN
> --------------------------------------------------
> Function Scan on pr_onhand_gudang_standar
>   (cost=0.00..12.50 rows=1000 width=398) (actual time=439697.19..439697.19 rows=0 loops=1)
> Total runtime: 439697.23 msec
> --------------------------------------------------
> (2 row(s) affected)

Hmm, it's not going into the function itself. Can you run EXPLAIN
ANALYZE on the query as a single statement instead of as a function,
so that the EXPLAIN ANALYZE output shows the actual structure of the
query? Somewhere there is going to be a Seq Scan of a large table that
is probably missing an index.

Incidentally, your schema is an excellent example of why artificial
primary keys can be a good thing.

--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> "the West won the world not by the superiority of its ideas or values or
> religion but rather by its superiority in applying organized violence.
> Westerners often forget this fact, non-Westerners never do."
>  - Samuel P. Huntington
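For illustration, running the body of the function directly might look
something like the sketch below. The query shape is only a guess (the
real SELECT inside pr_onhand_gudang_standar is not shown in this
thread), and the index name is made up; the column names and parameter
values are taken from the view definition and the function call above:

    -- hypothetical: paste the actual SELECT from inside the function,
    -- with the parameters substituted as literals
    EXPLAIN ANALYZE
    SELECT fc_branch, fc_stockcode, fn_lastdisc3
    FROM   t_stockreal
    WHERE  fc_branch = 'CSATGR'
      AND  fc_stockcode BETWEEN 'AA' AND 'AG';

    -- if the plan shows a Seq Scan on t_stockreal, an index on the
    -- filtered columns may help (index name is just an example)
    CREATE INDEX idx_stockreal_branch_code
        ON t_stockreal (fc_branch, fc_stockcode);
    ANALYZE t_stockreal;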