The following bug has been logged on the website:
Bug reference: 14100
Logged by: Nikolay
Email address: Nikolay.Nikitin@infowatch.com
PostgreSQL version: 9.5.2
Operating system: Red Hat server 6.7
Description:
If we generate a big value of size X, the server process takes 370M (the
memory size of an empty session's process) + X:
select octet_length(string_agg(gen_random_bytes(1024), null::bytea)::bytea)
from generate_series(1,500 * 1024);
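For reference, gen_random_bytes() comes from the pgcrypto extension, and the
process sizes above can be observed, for example, as the resident set size of
the backend process on Linux; a rough measurement sketch, assuming pgcrypto
can be installed:
create extension if not exists pgcrypto;  -- provides gen_random_bytes()
select pg_backend_pid();                  -- find this session's backend PID
-- then, in a shell on the server, before and after running the query:
--   grep VmRSS /proc/<PID>/status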
If we insert a big value of size X into an ordinary table, the server process
takes 370M + 3 * X:
create table test
(
    tablespace_id numeric,
    b bytea
);
create table test_1() inherits (test);
insert into test_1(tablespace_id, b)
select 1, string_agg(gen_random_bytes(1024), null::bytea)::bytea b
from generate_series(1,500 * 1024);
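A quick sanity check that the value landed in the child table, and how large
it is:
select tablespace_id, octet_length(b) as value_bytes from test_1;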
If we insert a big value of size X into the partitioned table (a parent table
whose BEFORE INSERT trigger redirects the row into a child table), the server
process takes 370M + 6 * X:
create or replace function trg_fnc_test() returns trigger as $$
begin
    execute 'insert into test_' || new.tablespace_id
            || '(tablespace_id, b) values($1, $2)'
        using new.tablespace_id, new.b;
    return null;
end;
$$ language plpgsql;
create trigger trg_test_before_insert before insert on test for each row
execute procedure trg_fnc_test();
insert into test(tablespace_id, b)
select 1, string_agg(gen_random_bytes(1024), null::bytea)::bytea b
from generate_series(1,500 * 1024);
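Judging from the numbers above, one workaround sketch (assuming the client
already knows the target child table) is to insert into the child directly,
skipping the trigger and bringing the overhead back down to roughly 3 * X:
-- bypass the parent table's trigger by targeting the child directly
insert into test_1(tablespace_id, b)
select 1, string_agg(gen_random_bytes(1024), null::bytea)
from generate_series(1, 500 * 1024);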
I see two bugs:
1. Server processes take memory from server memory without any limit. For
example, 10 concurrent sessions each inserting a 1G value into the partitioned
table would take roughly 10 * (370M + 6 * 1G), i.e. about 63G, if that much
memory is available, or fail with an out-of-memory error otherwise.
2. A footprint of 370M + 6 * X is very large: inserting a single 1G value will
take over 6G.
Can you reduce memory consumption to 2 * X or less in these cases?