Thread: PostgreSQL 8.1.0: out of memory during vacuum full analyze

PostgreSQL 8.1.0: out of memory during vacuum full analyze

From
DANTE ALEXANDRA
Date:
Hello,

This is my first posting to this mailing list and I am a PostgreSQL newbie.
I posted this message to the "novice" mailing list last Friday, but I
have not had any response.
I hope someone can help me.

I am running a benchmark of PostgreSQL on a 300 GB database, using
release 8.1.0.

I have created my database, which contains 8 tables, loaded each table
and created the appropriate indexes.
Then, on each table, I launched the "VACUUM FULL ANALYZE" command
as a non-root user.

This command failed on the last table, the biggest one, called "lineitem",
which contains 1799989091 rows (nearly 300 GB of data).
The description of this table is:
TPCH=# \d lineitem
               Table "public.lineitem"
     Column      |         Type          | Modifiers
-----------------+-----------------------+-----------
 l_orderkey      | bigint                | not null
 l_partkey       | bigint                | not null
 l_suppkey       | bigint                | not null
 l_linenumber    | bigint                | not null
 l_quantity      | numeric               |
 l_extendedprice | numeric               |
 l_discount      | numeric               |
 l_tax           | numeric               | not null
 l_returnflag    | character(1)          |
 l_linestatus    | character(1)          |
 l_shipdate      | date                  |
 l_commitdate    | date                  |
 l_receiptdate   | date                  |
 l_shipinstruct  | character(25)         |
 l_shipmode      | character(10)         |
 l_comment       | character varying(44) |
Indexes:
    "i_l_orderkey" btree (l_orderkey), tablespace "tb_index"
Tablespace: "tb_lit"


Three hours after launching the command, I got this error:

   TPCH=# VACUUM FULL VERBOSE ANALYZE lineitem;
   INFO:  vacuuming "public.lineitem"
   ERROR:  out of memory
   DETAIL:  Failed on request of size 67108864.

I have not tried to run VACUUM FULL ANALYZE again on this table.

Can someone tell me why this error appears?
Thank you for your help.

Regards,
Alexandra DANTE

Re: PostgreSQL 8.1.0: out of memory during vacuum full analyze

From
Pandurangan R S
Date:
Hi,

You need to increase maintenance_work_mem in the postgresql.conf file.

http://www.powerpostgresql.com/Downloads/annotated_conf_80.html
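As a sketch, maintenance_work_mem can also be raised for just the current session before running the command (the value below is illustrative, not a recommendation; in 8.1 the setting is an integer in kB):

```sql
-- Raise the memory budget for maintenance commands in this session only.
-- 262144 kB = 256 MB; pick a value your machine can actually afford.
SET maintenance_work_mem = 262144;
VACUUM FULL VERBOSE ANALYZE lineitem;
```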

Regards
Pandu

On 12/12/05, DANTE ALEXANDRA <ALEXANDRA.DANTE@bull.net> wrote:
> [...]
> Three hours after launching the command, I got this error:
>
>    TPCH=# VACUUM FULL VERBOSE ANALYZE lineitem;
>    INFO:  vacuuming "public.lineitem"
>    ERROR:  out of memory
>    DETAIL:  Failed on request of size 67108864.

Re: PostgreSQL 8.1.0: out of memory during vacuum full analyze

From
Tom Lane
Date:
DANTE ALEXANDRA <ALEXANDRA.DANTE@BULL.NET> writes:
> Then, on each table, I launched the "VACUUM FULL ANALYZE" command
> as a non-root user.
> This command failed on the last table, the biggest, called "lineitem"
> which contains 1799989091 rows (nearly 300 GB of data).

VACUUM FULL requires workspace proportional to the number of blocks in
the table.  You probably need to bump up the kernel's per-process memory
limit (see ulimit and so on) if you want to VACUUM FULL such a large table.
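One way to get a feel for the size involved (a sketch; relpages is only as fresh as the last VACUUM or ANALYZE of the table):

```sql
-- Number of 8 kB blocks the table occupied at the last VACUUM/ANALYZE.
SELECT relpages FROM pg_class WHERE relname = 'lineitem';
-- ~300 GB at 8 kB per block is roughly 39 million blocks, so the
-- workspace VACUUM FULL asks for can easily exceed a default ulimit.
```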

My advice: you shouldn't be using VACUUM FULL anyway.  Quite aside from
the memory issue, it's likely to take forever and a day.

            regards, tom lane

Re: PostgreSQL 8.1.0: out of memory during vacuum full

From
DANTE ALEXANDRA
Date:
Thank you for your answer.
Instead of launching a VACUUM FULL ANALYZE on the "big" table, I will
try to use the "ALTER TABLE ... SET STATISTICS" command.

I will do this later, during the optimization phase of the benchmark.
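A sketch of what that could look like for this table (the column choice and target value are illustrative, not a recommendation):

```sql
-- Raise the per-column statistics target, then re-ANALYZE so the
-- planner sees the better statistics. 100 is an illustrative target.
ALTER TABLE lineitem ALTER COLUMN l_orderkey SET STATISTICS 100;
ANALYZE lineitem;
```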

Best regards,
Alexandra DANTE


Tom Lane wrote:

>DANTE ALEXANDRA <ALEXANDRA.DANTE@BULL.NET> writes:
>
>>Is it correct ? Why ANALYZE has taken less than one minute ?
>
>That's what it's supposed to do --- ANALYZE just samples the table,
>so it's fast even on very large tables.
>
>If you think you need better statistics, you could increase the
>statistics target for ANALYZE (see ALTER TABLE SET STATISTICS),
>but VACUUM FULL per se is not going to improve them at all.
>VACUUM and ANALYZE have entirely different purposes --- it's only
>historical happenstance that there is a combination command to do
>both.
>
>            regards, tom lane

DANTE ALEXANDRA wrote:

> Thank you for your answer.
> To give you more detail: first, I launched the ANALYZE command on this
> big table, but the elapsed time was very short. As my queries on this
> table were very slow, I then tried the VACUUM FULL ANALYZE to update
> the statistics.
>
> Is that correct? Why has ANALYZE taken less than one minute?
>
> Thanks for your help.
>
> Regards
> Alexandra DANTE