Thread: Massive update, memory usage

Massive update, memory usage

From
Trenta sis
Date:

Hi,

I have a Linux server (Debian) running Postgres 8.3, and I have a problem with a massive update of about 400,000 updates/inserts.
If I execute about 100,000 it all seems fine, but when I execute 400,000 I have the same problem with or without a transaction (I need to do it within a transaction): memory usage and disk usage keep increasing.
With a run of 400,000 inserts/updates the server begins working well, but after 100 seconds of execution RAM usage increases, then swap, and finally all RAM and swap are used and the execution can't finish.
I have made some tuning in server, I have modified:
-shared_buffers 1024MB
-work_mem 512MB
-effective_cache_size 2048MB
-random_page_cost 2.0
-checkpoint_segments 64
-wal_buffers 8MB
-max_prepared_transactions 100
-synchronous_commit off

What is wrong in this configuration for executing these inserts/updates?

The server has 4 GB RAM, 3 GB swap, and SATA disks in RAID 5.
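
In postgresql.conf syntax the settings above would read roughly as follows (the same values as listed, only restated):

  shared_buffers = 1024MB
  work_mem = 512MB
  effective_cache_size = 2048MB
  random_page_cost = 2.0
  checkpoint_segments = 64
  wal_buffers = 8MB
  max_prepared_transactions = 100
  synchronous_commit = off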


Thanks


Re: Massive update, memory usage

From
Andreas Kretschmer
Date:
Trenta sis <trenta.sis@gmail.com> wrote:

>
> Hi,
>
> I have a Linux Server (Debian) with Postgres 8.3 and I have problems with a
> massive update, about 400000 updates/inserts.

Updates or Inserts?


> If I execute about 100000 it seems all ok, but when I execute 400000, I have
> the same problem with or without a transaction (I need to do with a
> transaction) increase memory usage and disk usage.
> With a execution of 400.000 inserts/update server begin woring well, but after
> 100 seconds of executions increase usage of RAM, and then Swap and finally all
> RAM and swap are used and execution can't finish.
> I have made some tuning in server, I have modified:
> -shared_buffers 1024 Mb
> -work_mem 512 Mb

Way too high, but that's not the problem here... (I guess; it depends on the
real query, see below about EXPLAIN ANALYSE)

> -effective_cache_size 2048Mb

You have 4 GB, but you have defined only 1 GB for shared_buffers, and only
2 GB for shared_buffers and OS cache together (effective_cache_size). What
about the other 2 GB?
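
As a rough illustration only (rule-of-thumb numbers, not values taken from this machine), a dedicated 4 GB box is more commonly started off with something like:

  shared_buffers = 1024MB         # ~25% of RAM
  work_mem = 16MB                 # per sort/hash per backend, so keep it modest
  effective_cache_size = 3072MB   # shared_buffers plus the expected OS cache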


> -random_page_cost 2.0

You have changed the default here; why?


> -checkpoint_segments 64
> -wal_buffers 8Mb
> -max_prepared_transaction 100
> -synchronous_commit off
>
> what is wrong in this configuration to executes this inserts/update?

Hard to guess. Can you provide the output generated by
EXPLAIN ANALYSE <your query>?
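
For example, something like the following (table and column names are placeholders for your real query):

  BEGIN;
  EXPLAIN ANALYZE
  UPDATE my_table SET some_column = some_column + 1 WHERE some_key = 42;
  ROLLBACK;  -- EXPLAIN ANALYZE really executes the UPDATE, so roll it back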


>
> Server has: 4Gb RAM, 3GB Swap and SATA Disk with RAID5

RAID 5 isn't a good choice for a database server...


Andreas
--
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.                              (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.              N 51.05082°, E 13.56889°

Re: Massive update, memory usage

From
Craig Ringer
Date:
On 10/28/2010 02:38 AM, Trenta sis wrote:
>
> Hi,
>
> I have a Linux Server (Debian) with Postgres 8.3 and I have problems
> with a massive update, about 400000 updates/inserts.
> If I execute about 100000 it seems all ok, but when I execute 400000, I
> have the same problem with or without a transaction (I need to do with a
> transaction) increase memory usage and disk usage.
> With a execution of 400.000 inserts/update server begin woring well, but
> after 100 seconds of executions increase usage of RAM, and then Swap and
> finally all RAM and swap are used and execution can't finish.

Do you have lots of triggers on the table? Or foreign key relationships
that are DEFERRABLE?
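
Background for the question: every row touched under a deferrable constraint queues an after-trigger event that is kept until COMMIT, so a single 400,000-row transaction can build up a very large queue in backend memory. Such a constraint is declared, for example, like this (table and column names are only examples):

  ALTER TABLE child_table
    ADD CONSTRAINT child_parent_fk
    FOREIGN KEY (parent_id) REFERENCES parent_table (id)
    DEFERRABLE INITIALLY DEFERRED;  -- checks are queued and run at COMMIT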

--
Craig Ringer

Massive update, memory usage

From
Trenta sis
Date:


There are about 100,000 inserts and 300,000 updates. Without a transaction it seems to work, but with a transaction it does not. With only about 300,000 updates it seems to finish correctly, but the last 20% is slow because it is using swap...
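
Running without the single wrapping transaction effectively means committing in smaller pieces; a common middle ground is to batch the work explicitly, roughly like this (placeholder names and ranges, sketch only):

  -- commit every 50,000 rows instead of one 400,000-row transaction
  BEGIN;
  UPDATE my_table SET some_column = 1 WHERE id BETWEEN 1 AND 50000;
  COMMIT;
  BEGIN;
  UPDATE my_table SET some_column = 1 WHERE id BETWEEN 50001 AND 100000;
  COMMIT;
  -- ...and so on for the remaining ranges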

Is there any tuning to do in this configuration, or is it correct?

thanks

2010/10/28 Craig Ringer <craig@postnewspapers.com.au>

On 10/28/2010 02:38 AM, Trenta sis wrote:

Hi,

I have a Linux Server (Debian) with Postgres 8.3 and I have problems
with a massive update, about 400000 updates/inserts.
If I execute about 100000 it seems all ok, but when I execute 400000, I
have the same problem with or without a transaction (I need to do with a
transaction) increase memory usage and disk usage.
With a execution of 400.000 inserts/update server begin woring well, but
after 100 seconds of executions increase usage of RAM, and then Swap and
finally all RAM and swap are used and execution can't finish.

Do you have lots of triggers on the table? Or foreign key relationships that're DEFERRABLE ?

--
Craig Ringer


Re: Massive update, memory usage

From
Cédric Villemain
Date:
2010/10/28 Trenta sis <trenta.sis@gmail.com>:
>
>
> There are about 100.000 inserts and 300000 updates. Without transaction it
> seems that works, but with a transaction no. Witt about only 300.000 updates
> it seems that can finish correctly, but last 20% is slow because is using
> swap...
>
> Any tunning to do in this configuration or it is correct?

You should post your queries and the table definitions involved.

>
> thanks
>
> 2010/10/28 Craig Ringer <craig@postnewspapers.com.au>
>>
>> On 10/28/2010 02:38 AM, Trenta sis wrote:
>>>
>>> Hi,
>>>
>>> I have a Linux Server (Debian) with Postgres 8.3 and I have problems
>>> with a massive update, about 400000 updates/inserts.
>>> If I execute about 100000 it seems all ok, but when I execute 400000, I
>>> have the same problem with or without a transaction (I need to do with a
>>> transaction) increase memory usage and disk usage.
>>> With a execution of 400.000 inserts/update server begin woring well, but
>>> after 100 seconds of executions increase usage of RAM, and then Swap and
>>> finally all RAM and swap are used and execution can't finish.
>>
>> Do you have lots of triggers on the table? Or foreign key relationships
>> that're DEFERRABLE ?
>>
>> --
>> Craig Ringer
>
>
>



--
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support

Re: Massive update, memory usage

From
Trenta sis
Date:
Well, I have solved it by running with more RAM, and now it works correctly.

Thanks



2010/10/28 Cédric Villemain <cedric.villemain.debian@gmail.com>
2010/10/28 Trenta sis <trenta.sis@gmail.com>:
>
>
> There are about 100.000 inserts and 300000 updates. Without transaction it
> seems that works, but with a transaction no. Witt about only 300.000 updates
> it seems that can finish correctly, but last 20% is slow because is using
> swap...
>
> Any tunning to do in this configuration or it is correct?

You should post your queries, and tables definitions involved.

>
> thanks
>
> 2010/10/28 Craig Ringer <craig@postnewspapers.com.au>
>>
>> On 10/28/2010 02:38 AM, Trenta sis wrote:
>>>
>>> Hi,
>>>
>>> I have a Linux Server (Debian) with Postgres 8.3 and I have problems
>>> with a massive update, about 400000 updates/inserts.
>>> If I execute about 100000 it seems all ok, but when I execute 400000, I
>>> have the same problem with or without a transaction (I need to do with a
>>> transaction) increase memory usage and disk usage.
>>> With a execution of 400.000 inserts/update server begin woring well, but
>>> after 100 seconds of executions increase usage of RAM, and then Swap and
>>> finally all RAM and swap are used and execution can't finish.
>>
>> Do you have lots of triggers on the table? Or foreign key relationships
>> that're DEFERRABLE ?
>>
>> --
>> Craig Ringer
>
>
>



--
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support


Re: Massive update, memory usage

From
Emanuele Bracci Poste
Date:
Sorry, deadlines aside, but didn't it seem appropriate to ask whoever is managing the project before removing a resource?
Thanks anyway.
Emanuele

On 28 Oct 2010, at 23:48, Trenta sis wrote:

Well, I have solved executing with more RAM, and then works correctly

Thanks



2010/10/28 Cédric Villemain <cedric.villemain.debian@gmail.com>
2010/10/28 Trenta sis <trenta.sis@gmail.com>:
>
>
> There are about 100.000 inserts and 300000 updates. Without transaction it
> seems that works, but with a transaction no. Witt about only 300.000 updates
> it seems that can finish correctly, but last 20% is slow because is using
> swap...
>
> Any tunning to do in this configuration or it is correct?

You should post your queries, and tables definitions involved.

>
> thanks
>
> 2010/10/28 Craig Ringer <craig@postnewspapers.com.au>
>>
>> On 10/28/2010 02:38 AM, Trenta sis wrote:
>>>
>>> Hi,
>>>
>>> I have a Linux Server (Debian) with Postgres 8.3 and I have problems
>>> with a massive update, about 400000 updates/inserts.
>>> If I execute about 100000 it seems all ok, but when I execute 400000, I
>>> have the same problem with or without a transaction (I need to do with a
>>> transaction) increase memory usage and disk usage.
>>> With a execution of 400.000 inserts/update server begin woring well, but
>>> after 100 seconds of executions increase usage of RAM, and then Swap and
>>> finally all RAM and swap are used and execution can't finish.
>>
>> Do you have lots of triggers on the table? Or foreign key relationships
>> that're DEFERRABLE ?
>>
>> --
>> Craig Ringer
>
>
>



--
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support
