column size too large, is this a bug? - Mailing list pgsql-performance

From: Qing Zhao
Subject: column size too large, is this a bug?
Date:
Msg-id: 082D0538-7E86-11D8-8B9C-000A95AB8896@quotefx.net
Responses: Re: column size too large, is this a bug? (Tom Lane <tgl@sss.pgh.pa.us>)
           Re: column size too large, is this a bug? (Stephan Szabo <sszabo@megazone.bigpanda.com>)
List: pgsql-performance
I have a query which gets data from a single table.
When I try to get data for an RFQ which has around 5000 rows, it
breaks off at the 18th row.
If I remove some columns, it returns all the rows, and not so slowly.
I have tried different sets of columns, and there is no pattern
based on which columns are included.

But one thing is sure: once the size of a row grows beyond some number
of bytes, the records do not get returned. The following query returns
all 5001 rows pretty fast:


  select
    _level_ as l,
    nextval('seq_pk_bom_detail') as bom_detail,
    prior nextval('seq_pk_bom_detail') as parent_subassembly,
    parent_part_number,
    customer_part_number,
    /* mfr_name,
       mfr_part,
       description, */
    commodity,
    needs_date,
    target_price,
    comments,
    case qty_per
      when null then 0.00001
      when 0 then 0.00001
      else qty_per
    end,
    qty_multiplier1,
    qty_multiplier2,
    qty_multiplier3,
    qty_multiplier4,
    qty_multiplier5
  from bom_detail_work_clean
  where (0=0)
    and bom_header = 20252
    and file_number = 1
  start with customer_part_number = 'Top Assembly 1'
  connect by parent_part_number = prior customer_part_number;
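
(A side note on that CASE: a simple CASE never matches a "when null"
branch, because the implied comparison qty_per = NULL yields unknown
rather than true. A sketch of the usual workaround, which folds the
NULL and zero cases together:

   -- nullif() maps 0 to NULL, then coalesce() supplies the fallback,
   -- so both NULL and 0 end up as 0.00001
   coalesce(nullif(qty_per, 0), 0.00001)

The CASE is the same in both queries below, though, so it is not what
causes the cutoff.)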


But if I uncomment description, it returns only 18 rows:

  select
    _level_ as l,
    nextval('seq_pk_bom_detail') as bom_detail,
    prior nextval('seq_pk_bom_detail') as parent_subassembly,
    parent_part_number,
    customer_part_number,
    /* mfr_name,
       mfr_part, */
    description,
    commodity,
    needs_date,
    target_price,
    comments,
    case qty_per
      when null then 0.00001
      when 0 then 0.00001
      else qty_per
    end,
    qty_multiplier1,
    qty_multiplier2,
    qty_multiplier3,
    qty_multiplier4,
    qty_multiplier5
  from bom_detail_work_clean
  where (0=0)
    and bom_header = 20252
    and file_number = 1
  start with customer_part_number = 'Top Assembly 1'
  connect by parent_part_number = prior customer_part_number;
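
In case the syntax throws anyone: start with ... connect by is
Oracle-style hierarchical syntax, which stock PostgreSQL does not
parse. A rough sketch of the same traversal as a standard recursive
CTE, assuming a server with WITH RECURSIVE support and trimming the
column list to what drives the recursion:

   with recursive bom(l, customer_part_number, parent_part_number) as (
      -- anchor: the top assembly for this RFQ
      select 1, customer_part_number, parent_part_number
      from bom_detail_work_clean
      where bom_header = 20252
        and file_number = 1
        and customer_part_number = 'Top Assembly 1'
      union all
      -- recurse: rows whose parent is already in the result
      select b.l + 1, d.customer_part_number, d.parent_part_number
      from bom_detail_work_clean d
      join bom b on d.parent_part_number = b.customer_part_number
      where d.bom_header = 20252
        and d.file_number = 1
   )
   select * from bom;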

Now, these 18 rows are the level-2 records in the hierarchical query. I
have a feeling the server has some memory paging mechanism, and if it
cannot handle more than a certain number of bytes, it just returns what
it has.
During your investigation of PostgreSQL optimization, did you come
across any setting that might help us?
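
For reference, these are the memory-related settings that seemed worth
checking on our side (names as spelled in recent PostgreSQL releases;
older ones call work_mem "sort_mem"):

   -- per-sort/per-hash working memory: a small value makes large
   -- queries spill to disk, but should never drop result rows
   SHOW work_mem;
   -- size of the shared buffer cache
   SHOW shared_buffers;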

Thanks!

Qing

PS: I just reloaded the file after reducing the content of the
description column, and this time it got uploaded. So it looks like the
problem is the size of the record being inserted.
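
If it really is record size, a diagnostic along these lines should show
how wide description gets for this RFQ (octet_length() is standard SQL;
pg_column_size() only exists on newer servers):

   select max(octet_length(description))   as max_desc_bytes,
          max(pg_column_size(description)) as max_desc_stored
   from bom_detail_work_clean
   where bom_header = 20252
     and file_number = 1;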

