Thread: [PERFORM] printing results of query to file in different times

[PERFORM] printing results of query to file in different times

From: Mariel Cherkassky

I'm trying to understand what PostgreSQL is doing in an issue I'm having. Our app team wrote a function that iterates with a cursor over the results of a query and writes some columns to a file via the utl_file function. I don't understand why, but PostgreSQL writes the data to the file in the filesystem in parts: it runs the query, it takes time to get results back, and when the results come back it writes data to the file, then suddenly stops for X minutes. After those X minutes it starts writing the data again, and it continues that way until it's done. The query returns 100 rows in total. I want to understand why it stops suddenly. There aren't any locks in the database during this operation.

My function looks like this:

func(a,b,c...)
    cursor cr for
        select ab,c,d,e.....
begin
    raise notice 'starting loop time - %', timeofday();
    for cr_record in cr loop
        raise notice 'print to file - %', timeofday();
        utl_file.write(file, cr_record);
    end loop;
end
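
For reference, a syntactically complete plpgsql skeleton of the sketch above could look like this (assumptions: the argument list, table, and column names are placeholders, and the file I/O uses the Oracle-style utl_file calls - fopen/put_line/fclose - provided by the orafce extension that comes up later in the thread):

    CREATE OR REPLACE FUNCTION func(a text, b text, c text)
    RETURNS void AS $$
    DECLARE
        cr CURSOR FOR SELECT col_a, col_b, col_c FROM some_table;  -- placeholder query
        cr_record RECORD;
        file      utl_file.file_type;
    BEGIN
        -- the directory must be whitelisted in utl_file.utl_file_dir for orafce
        file := utl_file.fopen('/tmp', 'out.txt', 'w');
        RAISE NOTICE 'starting loop time - %', timeofday();
        FOR cr_record IN cr LOOP
            RAISE NOTICE 'print to file - %', timeofday();
            PERFORM utl_file.put_line(file, cr_record::text);
        END LOOP;
        PERFORM utl_file.fclose(file);
    END;
    $$ LANGUAGE plpgsql;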

The log of the run shows the following output:

starting loop 16:00
print to file : 16:03
print to file : 16:03
print to file : 16:07
print to file : 16:07
print to file : 16:07
print to file : 16:10
......



Can somebody explain this behavior to me? Why does the writing take so much time, at different minutes, after the query has already been executed and finished? Maybe I'm getting only part of the rows from the cursor at a time?


Re: [PERFORM] printing results of query to file in different times

From: Mariel Cherkassky
Anyone?



Re: [PERFORM] printing results of query to file in different times

From: Roman Konoval
Can you show explain with analyze and buffers options for your query?
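
That is, something like:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT ... ;   -- the select behind your cursor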

Regards,
Roman Konoval






Re: [PERFORM] printing results of query to file in different times

From: George Neuner


First I'd ask where did you get  utl_file  from?  Postgresql has no such
facility, so you must be using an extension.  And not one I'm familiar
with either -  EnterpriseDB has a utl_file implementation in their
Oracle compatibility stuff, but it uses "get" and "put" calls rather
than "read" and "write".

Second, raising notices can be slow - I assume you added them to see
what was happening?  How does the execution time compare if you remove them?
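
One way to test that without editing the function is to suppress delivery
of the notices to the client (a sketch; the RAISE statements still
execute, but much of their cost is in sending the messages):

    SET client_min_messages TO warning;   -- NOTICE messages no longer sent to the client
    SELECT func(a, b, c);                 -- placeholder call to your function
    RESET client_min_messages;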

I saw someone else asked about the execution plan, but I'm not sure that
will help here because it would affect only the initial select ... the
cursor would be working with the result set and should be able to skip
directly to the target rows.  I might expect several seconds for the
loop with I/O ... but certainly not minutes unless the server is
severely overloaded.

One thing you might look at is the isolation level of the query.  If you
are using READ_COMMITTED or less, and the table is busy, other writing
queries may be stepping on yours and even potentially changing your
result set during the cursor loop.  I would try using REPEATABLE_READ
and see what happens.
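
For example (a sketch - the level must be set before the transaction's
first query):

    BEGIN;
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT func(a, b, c);   -- the cursor loop now runs over a single snapshot
    COMMIT;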


George


Re: [PERFORM] printing results of query to file in different times

From: Mariel Cherkassky
I'm using an extension called orafce.
Yes, I added the RAISE NOTICE statements in order to see what is happening, but it doesn't run any faster without them. The execution plan isn't relevant because this happens for many queries, not for a specific one. I didn't understand what you mean by REPEATABLE_READ.


Re: [PERFORM] printing results of query to file in different times

From: George Neuner
Hi Mariel,

Please don't top post in the Postgresql groups.

On 9/5/2017 7:28 AM, Mariel Cherkassky wrote:
> 2017-08-31 16:24 GMT+03:00 George Neuner <gneuner2@comcast.net>:
>> One thing you might look at is the isolation level of the query.
>> If you are using READ_COMMITTED or less, and the table is busy,
>> other writing queries may be stepping on yours and even
>> potentially changing your result set during the cursor loop. I
>> would try using REPEATABLE_READ and see what happens.
>
> I didn't understand what you mean by REPEATABLE_READ.

I was referring to transaction isolation levels.  When multiple transactions are running concurrently, the DBMS can (or not) prevent them from seeing changes made by one another.  Consider 2 transactions A and B running concurrently:

  T1:   A reads table X
  T2:   B writes to table X
  T3:   B commits
  T4:   A reads table X again.

Depending on the isolation levels [and the specific query, obviously], A may or may not be able to see what changes B made to X.

The default isolation level in Postgresql is READ COMMITTED, which does allow transactions to see committed writes made by concurrently running transactions.  REPEATABLE READ is a higher level of isolation which effectively takes a snapshot of the table(s) when they are 1st read, and guarantees that any further reads (e.g., by cursors) of the tables made by the transaction continue to see the same results.
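
In terms of the timeline above (a sketch; table x and the update are
hypothetical):

    -- Session A                                 -- Session B
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SELECT count(*) FROM x;   -- snapshot taken here
                                                 UPDATE x SET ... ;
                                                 COMMIT;
    SELECT count(*) FROM x;   -- same answer as before under REPEATABLE READ;
    COMMIT;                   -- READ COMMITTED would see B's committed update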


My thought was that your loop may be running slowly because the table is being changed underneath your cursor.  It may be better to pull the results into a temporary table and run your cursor loop over that.

For more information, see:
https://www.postgresql.org/docs/current/static/transaction-iso.html
https://www.postgresql.org/docs/9.6/static/sql-begin.html
https://stackoverflow.com/questions/6274457/set-isolation-level-for-postgresql-stored-procedures

George

Re: [PERFORM] printing results of query to file in different times

From: George Neuner
Hi Mariel,

On 9/6/2017 4:38 AM, Mariel Cherkassky wrote:
> I'm sure that those tables aren't involved in any other transaction
> when the loop is running. Anything else that I can check? I think that
> maybe it's connected to some fetching properties, but I'm not familiar
> with the settings.

That's the problem.  There are a lot of things that can affect query performance, but most of them won't affect an open cursor unless isolation is low and the query's source tables are changing due to ongoing operations.  Each time the cursor is accessed, the query's source tables are checked for modifications, and if they have been changed, the cursor's query is re-executed ... potentially changing the result set.

Not that it matters here, but you didn't show your actual query.  Even if you are only fetching 100 rows, the query may be doing a lot of work (joins, sorts, etc.) to identify those rows.  If a complicated query is being executed over and over due to ongoing table modifications, that time adds up.  That's why I suggested using a temporary table that you know won't be modified while the cursor is open on it - it's a way of side-stepping isolation issues that are beyond your control.

If there really is no contention for the source tables, the only other possibilities are a badly over-loaded (or mis-configured) server, a problem with the storage system (e.g., a bad disk that is causing hiccups rather than outright failures), or some unknown issue with the extension you are using.

I'm afraid I'm out of suggestions.
George

Re: [PERFORM] printing results of query to file in different times

From: George Neuner
Hi Mariel,

On 9/7/2017 7:02 AM, Mariel Cherkassky wrote:
> I'm pretty sure that the source tables are changing during the loop. I
> have no problem showing the query:
>
>        SELECT AREA,
>               PHONE,
>               TERM_CODE,
>               LINE_NO,
>               PAYMENT_START_DATE,
>               PAYMENT_END_DATE
>        FROM   TC_FINTERMS A
>        WHERE  AREA IN ('32', '75', '31', '35') -- Iditb 04/12/2011
>        AND    TERM_TYPE = '1'
>        AND    TERM_CODE NOT IN (15,255,180,182)
>        AND    (PAYMENT_END_DATE IS NULL OR
>                (PAYMENT_END_DATE > current_timestamp AND
>                 date_trunc('day', PAYMENT_END_DATE) <> date_trunc('day', PAYMENT_START_DATE)))
>        AND    not exists(SELECT 3
>                FROM   BL_NLFR
>                WHERE  AREA IN ('75', '35') -- Iditb 04/12/2011
>                and    phone = a.phone)
>        AND    NOT EXISTS(SELECT 2
>                FROM   TV_FINTERM
>                WHERE  TERM_CODE = A.TERM_CODE
>                AND    (TERM_TYPE_CODE = 556 or term_type_ss = 1::varchar))
>        AND    NOT EXISTS(SELECT 1
>                FROM   NAP_IP_DEBIT    B,
>                       PS_RF_INST_PROD C
>                WHERE  B.INST_PROD_ID = C.INST_PROD_ID
>                AND    C.SETID = 'SHARE'
>                AND    C.NAP_MAKAT_CD <> 7
>                AND    NAP_AREA2 = A.AREA
>                AND    NAP_PHONE_NUM = A.PHONE
>                AND    (B.NAP_FINTERM_END_DT IS NULL OR
>                        B.NAP_FINTERM_END_DT > current_timestamp)
>                AND    NAP_BILLING_CATNUM = A.TERM_CODE
>                AND    A.LINE_NO = B.NAP_FINTERMS_LINE);
>
>
> explain query:

I don't understand the point of the not-exists sub-selects ... they
return a single value (1, 2, or 3) or nothing, depending - but then that
value isn't compared to anything in TC_FINTERMS.  If you were intending
to "select top <n>", you should know Postgresql doesn't support the
"top" keyword ... you need to use "limit" [and sort first if the order
matters].

As far as I can see, those sub-selects are just wasting time without
affecting the results at all.


> -------------------------------
>  Nested Loop Anti Join  (cost=67766.53..1008863.68 rows=1 width=38)
>    ->  Nested Loop Anti Join  (cost=67766.25..1008853.87 rows=1 width=38)
>          ->  Hash Anti Join  (cost=67765.12..1008805.25 rows=1 width=38)
>                Hash Cond: (a.term_code = tv_finterm.term_code)
>                ->  Bitmap Heap Scan on tc_finterms a  (cost=48268.39..843129.37 rows=1647089 width=38)
>                      Recheck Cond: ((((term_type)::text = '1'::text) AND (payment_end_date IS NULL)) OR (((term_type)::text = '1'::text) AND (payment_end_date > now())))
>                      Filter: (((area)::text = ANY ('{32,75,31,35}'::text[])) AND (term_code <> ALL ('{15,255,180,182}'::integer[])) AND ((payment_end_date IS NULL) OR ((payment_end_date > now()) AND (date_trunc('day'::text, payment_end_date) <> date_trunc('day'::text, payment_start_date)))))
>                      ->  BitmapOr  (cost=48268.39..48268.39 rows=1867571 width=0)
>                            ->  Bitmap Index Scan on mariel_tc_finterms  (cost=0.00..32332.45 rows=1272789 width=0)
>                                  Index Cond: (((term_type)::text = '1'::text) AND (payment_end_date IS NULL))
>                            ->  Bitmap Index Scan on mariel_tc_finterms  (cost=0.00..15112.39 rows=594782 width=0)
>                                  Index Cond: (((term_type)::text = '1'::text) AND (payment_end_date > now()))
>                ->  Hash  (cost=18808.47..18808.47 rows=55061 width=4)
>                      ->  Seq Scan on tv_finterm  (cost=0.00..18808.47 rows=55061 width=4)
>                            Filter: ((term_type_code = 556) OR ((term_type_ss)::text = '1'::text))
>          ->  Nested Loop  (cost=1.13..24.87 rows=1 width=23)
>                ->  Index Scan using ps_rf_inst_prod_comb1 on ps_rf_inst_prod c  (cost=0.56..12.23 rows=2 width=17)
>                      Index Cond: (((nap_phone_num)::text = (a.phone)::text) AND ((nap_area2)::text = (a.area)::text))
>                      Filter: ((nap_makat_cd <> '7'::numeric) AND ((setid)::text = 'SHARE'::text))
>                ->  Index Only Scan using mariel_nap on nap_ip_debit b  (cost=0.56..6.31 rows=1 width=22)
>                      Index Cond: ((inst_prod_id = (c.inst_prod_id)::text) AND (nap_billing_catnum = (a.term_code)::numeric))
>                      Filter: (((nap_finterm_end_dt IS NULL) OR (nap_finterm_end_dt > now())) AND (a.line_no = (nap_finterms_line)::double precision))
>    ->  Index Only Scan using bl_nlfr_ix1 on bl_nlfr  (cost=0.28..18.40 rows=5 width=7)
>          Index Cond: ((area = ANY ('{75,35}'::text[])) AND (phone = (a.phone)::text))
> -------------------------------


Judging from the explanation [and modulo those odd sub-selects], this
should be reasonably quick as a normal batch query ... but according to
the estimates, some of those index scans are on 500K - 1M rows, which is
not something you want to be repeating many times under an open cursor.


> How can I use temporary tables?

First run your complex query and place the results in a temp table. Then
run your cursor over the temp table.  And finally, drop the temp table
because, if you want to run the function again, "select into" will fail
if the target table already exists.

     select  <result columns>
         into temp table <tablename>
         <rest of the complex query>;

     open <cursor> for
         select * from <tablename>;
     :
     close <cursor>;

     drop table <tablename>;


see https://www.postgresql.org/docs/current/static/sql-selectinto.html
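
With the names from this thread, the inside of the function might look
something like the following sketch (finterms_tmp is a made-up name;
ON COMMIT DROP is an alternative to the explicit drop, so re-running the
function won't hit "relation already exists"):

    CREATE TEMP TABLE finterms_tmp ON COMMIT DROP AS
        SELECT area, phone, term_code, line_no,
               payment_start_date, payment_end_date
        FROM   tc_finterms a
        WHERE  ... ;                     -- the complex filters, evaluated once

    FOR cr_record IN SELECT * FROM finterms_tmp LOOP
        PERFORM utl_file.write(file, cr_record);   -- the write call from the original function
    END LOOP;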

The temp table won't change under the open cursor, and so there won't be
any isolation issues.  If the performance is *still* bad after doing
this, then it's a server or extension issue.

George