Thread: out of memory

out of memory

From
martial.bizel@free.fr
Date:
Hello,

I get an "out of memory" error with these traces:


TopMemoryContext: 32768 total in 3 blocks; 5152 free (1 chunks); 27616 used
TopTransactionContext: 8192 total in 1 blocks; 8136 free (0 chunks); 56 used
DeferredTriggerXact: 0 total in 0 blocks; 0 free (0 chunks); 0 used
MessageContext: 24576 total in 2 blocks; 2688 free (14 chunks); 21888 used
PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 8192 total in 1 blocks; 3936 free (0 chunks); 4256 used
PortalHeapMemory: 23552 total in 5 blocks; 1160 free (4 chunks); 22392 used
ExecutorState: 8192 total in 1 blocks; 3280 free (4 chunks); 4912 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExecutorState: 24576 total in 2 blocks; 11264 free (14 chunks); 13312 used
ExprContext: 8192 total in 1 blocks; 8128 free (0 chunks); 64 used
AggContext: -1976573952 total in 287 blocks; 25024 free (414 chunks); -1976598976 used
DynaHashTable: 503439384 total in 70 blocks; 6804760 free (257 chunks); 496634624 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
CacheMemoryContext: 516096 total in 6 blocks; 126648 free (2 chunks); 389448 used
test_query: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
test_date: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
query_string_query_string_key: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
query_string_pkey: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_index_indrelid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_amop_opc_strategy_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_shadow_usename_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_amop_opr_opc_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_conversion_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_language_name_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_shadow_usesysid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_cast_source_target_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_conversion_name_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_conversion_default_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used
pg_class_relname_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_language_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_group_sysid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_namespace_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_proc_proname_args_nsp_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used
pg_opclass_am_name_nsp_index: 2048 total in 1 blocks; 768 free (0 chunks); 1280 used
pg_group_name_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_proc_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_operator_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_amproc_opc_procnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_operator_oprname_l_r_n_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used
pg_opclass_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_type_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_class_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
MdSmgr: 8192 total in 1 blocks; 5712 free (0 chunks); 2480 used
DynaHash: 8192 total in 1 blocks; 6912 free (0 chunks); 1280 used
DynaHashTable: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used
DynaHashTable: 8192 total in 1 blocks; 5080 free (0 chunks); 3112 used
DynaHashTable: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used
DynaHashTable: 8192 total in 1 blocks; 3016 free (0 chunks); 5176 used
DynaHashTable: 8192 total in 1 blocks; 4040 free (0 chunks); 4152 used
DynaHashTable: 24576 total in 2 blocks; 13240 free (4 chunks); 11336 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ErrorContext: 8192 total in 1 blocks; 8176 free (6 chunks); 16 used
2006-02-14 16:06:14 [25816] ERROR:  out of memory
DETAIL:  Failed on request of size 88.
ERROR:  out of memory
DETAIL:  Failed on request of size 88.


Could anybody help me?

Thanks a lot

MB


Re: out of memory

From
Tom Lane
Date:
martial.bizel@free.fr writes:
> I get an "out of memory" error with these traces:

Doing what?

> AggContext: -1976573952 total in 287 blocks; 25024 free (414 chunks); -1976598976 used
> DynaHashTable: 503439384 total in 70 blocks; 6804760 free (257 chunks); 496634624 used

I'd guess that a HashAgg operation ran out of memory ...

            regards, tom lane

Re: out of memory

From
martial.bizel@free.fr
Date:
Thanks for your response,

I ran this query:

SELECT query_string, DAY.ocu
FROM search_data.query_string,
     (SELECT SUM(occurence) AS ocu, query
      FROM daily.queries_detail_statistics
      WHERE date >= '2006-01-01' AND date <= '2006-01-30'
        AND portal IN (1, 2)
      GROUP BY query
      ORDER BY ocu DESC
      LIMIT 1000) AS DAY
WHERE DAY.query = id;

and after a few minutes I get the "out of memory" error, with this execution plan:
Nested Loop  (cost=8415928.63..8418967.13 rows=1001 width=34)
   ->  Subquery Scan "day"  (cost=8415928.63..8415941.13 rows=1000 width=16)
         ->  Limit  (cost=8415928.63..8415931.13 rows=1000 width=12)
               ->  Sort  (cost=8415928.63..8415932.58 rows=1582 width=12)
                     Sort Key: sum(occurence)
                     ->  HashAggregate  (cost=8415840.61..8415844.56 rows=1582 width=12)
                           ->  Seq Scan on queries_detail_statistics  (cost=0.00..8414056.00 rows=356922 width=12)
                                 Filter: ((date >= '2006-01-01'::date) AND (date <= '2006-01-30'::date) AND (((portal)::text = '1'::text) OR ((portal)::text = '2'::text)))
   ->  Index Scan using query_string_pkey on query_string  (cost=0.00..3.01 rows=1 width=34)
         Index Cond: ("outer".query = query_string.id)
(10 rows)

If the HashAgg operation ran out of memory, what can I do?

thanks a lot
martial

> martial.bizel@free.fr writes:
> > I get an "out of memory" error with these traces:
>
> Doing what?
>
> > AggContext: -1976573952 total in 287 blocks; 25024 free (414 chunks); -1976598976 used
> > DynaHashTable: 503439384 total in 70 blocks; 6804760 free (257 chunks); 496634624 used
>
> I'd guess that a HashAgg operation ran out of memory ...
>
>             regards, tom lane
>



Re: out of memory

From
Scott Marlowe
Date:
On Tue, 2006-02-14 at 10:03, martial.bizel@free.fr wrote:
> Thanks for your response,

SNIP

> If the HashAgg operation ran out of memory, what can I do?

1: Don't top post.

2: Have you run analyze? Normally when hash agg runs out of memory, the
planner THOUGHT the hash agg would fit in memory, but it was larger than
expected.  This is commonly a problem when you haven't run analyze.
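
For example, something like this (table names taken from the query posted earlier):

ANALYZE daily.queries_detail_statistics;
ANALYZE search_data.query_string;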

Re: out of memory

From
martial.bizel@free.fr
Date:
Yes, I ran the ANALYZE command before sending the query.
I should mention that the Postgres version is 7.3.4.

> On Tue, 2006-02-14 at 10:03, martial.bizel@free.fr wrote:
> > Thanks for your response,
>
> SNIP
>
> > If the HashAgg operation ran out of memory, what can I do?
>
> 1: Don't top post.
>
> 2: Have you run analyze? Normally when hash agg runs out of memory, the
> planner THOUGHT the hash agg would fit in memory, but it was larger than
> expected.  This is commonly a problem when you haven't run analyze.
>



Re: out of memory

From
Scott Marlowe
Date:
On Tue, 2006-02-14 at 10:15, martial.bizel@free.fr wrote:
> Yes, I ran the ANALYZE command before sending the query.
> I should mention that the Postgres version is 7.3.4.

So what does explain analyze show for this query, if anything?  Can you
increase your sort_mem or shared_buffers (I forget which hash_agg uses
off the top of my head...) if necessary to make it work.  Note you can
increase sort_mem on the fly for a given connection.
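
For example (a sketch; in 7.4 sort_mem is in kilobytes, and a plain SET only lasts for the current session):

SET sort_mem TO 131072;   -- 128 MB for this connection only; example value
SHOW sort_mem;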

Re: out of memory

From
martial.bizel@free.fr
Date:
The EXPLAIN ANALYZE command also crashes with the "out of memory" error.

Note that I have already tried many different values for the shared_buffers and sort_mem parameters.

Currently, in the config file, the values are:
sort_mem = 32768
shared_buffers = 30000

The server has 4 GB of RAM, and kernel.shmmax = 307200000.
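
For clarity, those settings in the units the 7.4 server actually uses (a sketch of the relevant config lines):

# postgresql.conf
sort_mem = 32768          # in KB, so 32 MB per sort/hash operation
shared_buffers = 30000    # in 8 KB buffers, roughly 234 MB

# /etc/sysctl.conf
kernel.shmmax = 307200000 # in bytes, roughly 293 MB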



> On Tue, 2006-02-14 at 10:15, martial.bizel@free.fr wrote:
> > Yes, I ran the ANALYZE command before sending the query.
> > I should mention that the Postgres version is 7.3.4.
>
> So what does explain analyze show for this query, if anything?  Can you
> increase your sort_mem or shared_buffers (I forget which hash_agg uses
> off the top of my head...) if necessary to make it work.  Note you can
> increase sort_mem on the fly for a given connection.
>



Re: out of memory

From
Scott Marlowe
Date:
On Tue, 2006-02-14 at 10:32, martial.bizel@free.fr wrote:
> The EXPLAIN ANALYZE command also crashes with the "out of memory" error.
>
> Note that I have already tried many different values for the shared_buffers and sort_mem parameters.
>
> Currently, in the config file, the values are:
> sort_mem = 32768
> shared_buffers = 30000

OK, on the command line, try increasing the sort_mem until hash_agg can
work.  With a 4 gig machine, you should be able to go as high as needed
here, I'd think.  Try as high as 500000 or so or more.  Then when
explain analyze works, compare the actual versus estimated number of
rows.
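
Something along these lines (a sketch; 500000 is just a starting value, and sort_mem is in KB, so that is roughly 500 MB for this session only):

SET sort_mem TO 500000;
EXPLAIN ANALYZE
SELECT query_string, DAY.ocu
FROM search_data.query_string,
     (SELECT SUM(occurence) AS ocu, query
      FROM daily.queries_detail_statistics
      WHERE date >= '2006-01-01' AND date <= '2006-01-30'
        AND portal IN (1, 2)
      GROUP BY query
      ORDER BY ocu DESC
      LIMIT 1000) AS DAY
WHERE DAY.query = id;

The interesting comparison is the rows= estimate on the HashAggregate line versus the actual row count it reports.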

Re: out of memory

From
Tom Lane
Date:
martial.bizel@free.fr writes:
> Yes, I ran the ANALYZE command before sending the query.
> I should mention that the Postgres version is 7.3.4.

Can't possibly be 7.3.4, that version didn't have HashAggregate.
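
For reference, the exact server version can be checked with:

SELECT version();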

How many distinct values of "query" actually exist in the table?

            regards, tom lane

Re: out of memory

From
Scott Marlowe
Date:
On Tue, 2006-02-14 at 11:36, Tom Lane wrote:
> martial.bizel@free.fr writes:
> > Yes, I ran the ANALYZE command before sending the query.
> > I should mention that the Postgres version is 7.3.4.
>
> Can't possibly be 7.3.4, that version didn't have HashAggregate.
>
> How many distinct values of "query" actually exist in the table?

I thought that looked odd.

Re: out of memory

From
martial.bizel@free.fr
Date:
You're right, the release is 7.4.7.

There are twenty million "query" records.
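
If it helps, the distinct count (which is what the hash aggregate depends on) could be checked with something like this (table name taken from the query above):

SELECT count(DISTINCT query)
FROM daily.queries_detail_statistics;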

> On Tue, 2006-02-14 at 11:36, Tom Lane wrote:
> > martial.bizel@free.fr writes:
> > > Yes, I ran the ANALYZE command before sending the query.
> > > I should mention that the Postgres version is 7.3.4.
> >
> > Can't possibly be 7.3.4, that version didn't have HashAggregate.
> >
> > How many distinct values of "query" actually exist in the table?
>
> I thought that looked odd.
>



Re: out of memory

From
martial.bizel@free.fr
Date:
Good morning,

I've increased sort_mem up to 2 GB, and the "out of memory" error still appears.

Here is the explain plan for the query I'm trying to run:

 Nested Loop  (cost=2451676.23..2454714.73 rows=1001 width=34)
   ->  Subquery Scan "day"  (cost=2451676.23..2451688.73 rows=1000 width=16)
         ->  Limit  (cost=2451676.23..2451678.73 rows=1000 width=12)
               ->  Sort  (cost=2451676.23..2451684.63 rows=3357 width=12)
                     Sort Key: sum(occurence)
                     ->  HashAggregate  (cost=2451471.24..2451479.63 rows=3357 width=12)
                           ->  Index Scan using test_date on queries_detail_statistics  (cost=0.00..2449570.55 rows=380138 width=12)
                                 Index Cond: ((date >= '2006-01-01'::date) AND (date <= '2006-01-30'::date))
                                 Filter: (((portal)::text = '1'::text) OR ((portal)::text = '2'::text))
   ->  Index Scan using query_string_pkey on query_string  (cost=0.00..3.01 rows=1 width=34)
         Index Cond: ("outer".query = query_string.id)
(11 rows)

Any new ideas?
Thanks,

MB.




> On Tue, 2006-02-14 at 10:32, martial.bizel@free.fr wrote:
> > The EXPLAIN ANALYZE command also crashes with the "out of memory" error.
> >
> > Note that I have already tried many different values for the shared_buffers and sort_mem parameters.
> >
> > Currently, in the config file, the values are:
> > sort_mem = 32768
> > shared_buffers = 30000
>
> OK, on the command line, try increasing the sort_mem until hash_agg can
> work.  With a 4 gig machine, you should be able to go as high as needed
> here, I'd think.  Try as high as 500000 or so or more.  Then when
> explain analyze works, compare the actual versus estimated number of
> rows.
>