Re: cache problem (v2) - Mailing list pgsql-admin
From: De Leeuw Guy
Subject: Re: cache problem (v2)
Msg-id: 469D0E39.7040805@eurofer.be
In response to: Re: cache problem (v2) (Andrew Sullivan <ajs@crankycanuck.ca>)
Responses: Re: cache problem (v2)
List: pgsql-admin
> Yes, sorry, I phrased that wrong. Let me put it differently: your
> trigger runs only inside the transaction of the calling statement,
> unless that statement itself is inside a longer explicitly-called
> transaction. For example:
>
>     t1                            t2
>
>     BEGIN
>     UPDATE table_with_trigger
>     SELECT something              SELECT ... FROM trigger_effect
>     INSERT something else
>     COMMIT
>
> In this case, t2 does _not_ see the effects of the trigger in t1,
> because those effects are not visible until the COMMIT. But
>
>     t1                            t2
>
>     UPDATE table_with_trigger
>     SELECT something              SELECT ... FROM trigger_effect
>     INSERT something else
>
> in this case, t2 _does_ see the effects, because the trigger's
> effects are COMMITted implicitly after the UPDATE statement.

Yes, that I understand, but it is not my case. I have:

    table test (code int, qte int);

In t1:

    INSERT INTO test VALUES (1, 150)
      -> my trigger does a SELECT ... WHERE code = 3 (does not exist)
         and then INSERT INTO test (code, qte) VALUES (3, 150)
    INSERT INTO test VALUES (2, 450)
      -> my trigger does a SELECT ... WHERE code = 3 (exists)
         and then UPDATE test SET qte = 600 WHERE code = 3
    ...

OK, everything works fine.

Now I have a flat file:

    1,150
    2,450

and I load it with:

    COPY ... path_to_this_flat_file

Result: code=3, qte=450. Why?

Another error I also get is "duplicate key". It is as if, inside my trigger, the second SELECT returns 0 rows (SPI_processed = 0), and in that case the trigger tries to INSERT instead of UPDATE. I have lost a week on this problem.

> If I read that right, you admit that you are inexperienced with the
> concepts and the software, and you are unable to show us all the
> relevant code or send us a precise description of what you are
> doing; but, you are convinced nevertheless that the problem is a bug
> or deficiency in PostgreSQL that nobody else seems to be having, and
> not a problem with your approach? I suggest you think again.

I have read the PostgreSQL documentation -- not all of it, but enough to start writing a test trigger.
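For reference, the insert-or-update logic described above could be sketched as a PL/pgSQL trigger (a sketch only -- the real trigger is in C using SPI, and the table/column names `test(code, qte)` and the hard-coded sum code 3 come from the toy example above, not from the real schema):

```sql
-- Sketch of the SELECT-then-INSERT/UPDATE trigger from the example.
-- Assumes: table test (code int, qte int), sum code = 3.
CREATE OR REPLACE FUNCTION maintain_sum() RETURNS trigger AS $$
BEGIN
    -- Skip rows for the sum code itself, to avoid the trigger
    -- feeding its own output back into the sum (recursion).
    IF NEW.code = 3 THEN
        RETURN NEW;
    END IF;

    UPDATE test SET qte = qte + NEW.qte WHERE code = 3;
    IF NOT FOUND THEN
        -- No sum row yet: create it.  (The original does a SELECT
        -- first; UPDATE-then-test-FOUND is the same logic in one step.)
        INSERT INTO test (code, qte) VALUES (3, NEW.qte);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_sum AFTER INSERT ON test
    FOR EACH ROW EXECUTE PROCEDURE maintain_sum();
```

One thing worth checking in the real C trigger is whether it fires BEFORE or AFTER the row, and what snapshot its SPI queries run under: within a single COPY, each per-row trigger call should still see the rows inserted by the previous calls, so a difference between COPY and individual INSERTs usually points at the trigger's own visibility assumptions rather than at the event it receives.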
But sure, I'm not a veteran :-) I never said it is a bug; I said that with COPY the trigger does not behave the same way as with INSERT. The events received by the trigger are the same, no? That is what I am trying to understand: why the behaviour of my trigger changes when the call comes from COPY rather than from a series of INSERTs, that's all. You gave me a lead with READ COMMITTED.

Let me try to explain more (tomorrow I will try to put a full example with data on our web site). I work on a statistics project. The project was first built in 1990 with C-ISAM from Informix. In 2004 I migrated the development from C-ISAM to Berkeley DB. Now, to give our users more possibilities (such as connecting from OOo Base, Calc and so on), I am trying to migrate the model to PostgreSQL.

The problem is that I receive about 4 million records per month, and to speed up the most common queries run by our users I build sums over different items. For example:

    Origins  (company a, company b, ...)  -> I build a "Total Eurofer"
    Markets  (France, Belgium, ...)       -> I build a "Total All Markets"
    Products (product a, product b)       -> total product a+b

In the real data, for the import side: 30 origins, 249 markets, 1472 products, 360 periods (like 1999-01). The sums are about 3 origins, 28 markets, 158 products.

This is the job of my trigger -- build the sum codes to speed up our users' standard queries:

    for each origin
        insert/update the base code
        check if this code updates a sum code
        if yes, insert/update the sum code
    for each market
        insert/update the base code
        check if this code updates a sum code
        if yes, do it
    ...

Finally, from about 4 million input records I output about 16 million records.

Sorry if I disturb you.

Regards
Guy
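As an aside, a roll-up of that shape can also be computed set-wise after the COPY, instead of row by row in the trigger -- often much faster for 4 million rows per month. A sketch only: the table and column names (`fact`, `origin_sum_map`, `base_code`, `sum_code`, `qte`) are hypothetical, since the real schema is not shown:

```sql
-- Bulk roll-up after loading: for each sum code, aggregate the base
-- codes that feed it.  origin_sum_map(base_code, sum_code) is a
-- hypothetical mapping table ("company a" -> "Total Eurofer", etc.).
INSERT INTO fact (origin, market, product, period, qte)
SELECT m.sum_code, f.market, f.product, f.period, SUM(f.qte)
FROM   fact f
JOIN   origin_sum_map m ON m.base_code = f.origin
GROUP  BY m.sum_code, f.market, f.product, f.period;
```

The same pattern would repeat for the market and product sum codes. The trade-off is that the sums are only correct after the batch statement runs, whereas the trigger keeps them current on every row.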