Thread: Invalid memory alloc
Hello, I'm processing a 100-million-row table.
I get an error message about memory and I'd like to know what can cause this issue.

...
psql:/home/ubuntu/create_topo.sql:12: NOTICE: 104855000 edges processed
psql:/home/ubuntu/create_topo.sql:12: NOTICE: 104856000 edges processed
psql:/home/ubuntu/create_topo.sql:12: NOTICE: 104857000 edges processed
psql:/home/ubuntu/create_topo.sql:12: NOTICE: invalid memory alloc request size 1677721600
psql:/home/ubuntu/create_topo.sql:12: NOTICE: UPDATE public.way_noded SET source = 88374866, target = 88362922 WHERE id = 142645362
 pgr_createtopology
--------------------
 FAIL
(1 row)

The server has shared_buffers set to 10GB.
Do you think this amount of memory should normally be enough to process the data?

Thanks
Marc
On 23-04-2015 16:55, Marc-André Goderre wrote:
> Hello, I'm processing a 100-million-row table.
> I get an error message about memory and I'd like to know what can cause this issue.
> [...]
> psql:/home/ubuntu/create_topo.sql:12: NOTICE: invalid memory alloc request size 1677721600
> [...]
> The server has shared_buffers set to 10GB.
> Do you think this amount of memory should normally be enough to process the data?

My question may sound stupid, but: you have 10GB of shared_buffers — how much physical memory does this server have?
How have you configured the kernel's swappiness, overcommit_memory, and overcommit_ratio?
Have you set anything non-default in shmmax or shmall?

Edson
> Hello, I'm processing a 100-million-row table.
> I get an error message about memory and I'd like to know what can cause this issue.
> [...]
> psql:/home/ubuntu/create_topo.sql:12: NOTICE: invalid memory alloc request size 1677721600
> psql:/home/ubuntu/create_topo.sql:12: NOTICE: UPDATE public.way_noded SET source = 88374866,target = 88362922 WHERE id = 142645362
> pgr_createtopology
> --------------------
> FAIL
> (1 row)

Hi,

what version of Postgres are you using? Any extensions? I guess you're using PostGIS, right?

This error indicates something is trying to allocate 1600 MB of memory in the backend — that should never happen, as data chunks larger than 1 GB are broken down into smaller pieces.

I hope you're not suffering data corruption. What happens if you do

select * from public.way_noded where id = 142645362;

? Any other hints? Log messages (from Linux or the postgres backend)?

Bye,
Chris.
On 4/23/15 3:15 PM, Edson Richter wrote:
> My question may sound stupid... you have 10GB of shared_buffers, but how
> much physical memory on this server?
> How have you configured the kernel swappiness, overcommit_memory,
> overcommit_ratio?
> Have you set anything different in shmmax or shmall?

I don't think this is an OS thing at all. That error message is closely associated with this code:

#define MaxAllocSize	((Size) 0x3fffffff) /* 1 gigabyte - 1 */
#define AllocSizeIsValid(size)	((Size) (size) <= MaxAllocSize)

If malloc failed you'd get an actual out-of-memory error, not a notice.

We need more information from the OP about what they're doing.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
Jim Nasby <Jim.Nasby@BlueTreble.com> writes:
> We need more information from the OP about what they're doing.

Yeah. Those NOTICEs about "nnn edges processed" are not coming out of anything in core Postgres; I'll bet whatever is producing those is at fault (by trying to palloc indefinitely large amounts of memory as a single chunk).

regards, tom lane
Postgresql is running on a server with 30 GB of memory.
I should have access to the server in a few minutes and I'll give you more information about the other specs.

Thanks
Marc

-----Original Message-----
From: Edson Richter
> My question may sound stupid... you have 10GB of shared_buffers, but how much physical memory on this server? [...]
Server settings:

kernel.shmmax = 13322600448
kernel.shmall = 4194304
vm.swappiness = 60
vm.overcommit_memory = 0
vm.overcommit_ratio = 50

PostgreSQL settings:

 effective_cache_size | 128MB
 checkpoint_segments  | 64
 maintenance_work_mem | 5GB
 segment_size         | 1GB
 shared_buffers       | 10GB
 work_mem             | 256MB

I use the PostGIS and pgRouting extensions.
The error comes when I use the pgRouting function pgr_createtopology().

Marc

Marc-André Goderre
IT Analyst
Cégep de Chicoutimi, 534 rue Jacques-Cartier Est, Chicoutimi (Québec) G7H 1Z6
magoderre@cgq.qc.ca
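[For anyone wanting to compare against their own box, the kernel values reported above can be read straight from /proc on a standard Linux system — the paths below are the stock sysctl locations, not taken from the thread:]

```shell
# Read the kernel memory settings discussed above directly from /proc
cat /proc/sys/vm/swappiness
cat /proc/sys/vm/overcommit_memory
cat /proc/sys/vm/overcommit_ratio
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmall
```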
> I use the PostGIS and pgRouting extensions.
> The error comes when I use the pgRouting function pgr_createtopology().

It appears pgRouting violates the 1 GB per-chunk limit in the postgres backend when processing large datasets:

https://github.com/pgRouting/pgrouting/issues/291

Bye,
Chris.
Can I change the segment size to allow more memory? Is it a good idea?
The function in question works only on the entire table, so I can't process just part of it.
Should I split the table into multiple tables and merge them after processing?

Thanks
Marc

-----Original Message-----
From: Chris Mair
> It appears pgRouting violates the 1 GB per-chunk limit in the postgres backend when processing large datasets:
> https://github.com/pgRouting/pgrouting/issues/291
On 4/27/15 8:45 AM, Marc-André Goderre wrote:
> Can I change the segment size to allow more memory? Is it a good idea?
> The function in question works only on the entire table, so I can't process just part of it.
> Should I split the table into multiple tables and merge them after processing?

Please don't top-post.

>> I use the PostGIS and pgRouting extensions.
>> The error comes when I use the pgRouting function pgr_createtopology().
>
> It appears pgRouting violates the 1 GB per-chunk limit in the postgres backend when processing large datasets:
>
> https://github.com/pgRouting/pgrouting/issues/291

Changing the segment size would just push the problem down the road; at some point the same error will happen.

That issue URL has a comment about "Don't try and process all of Europe at once, give it a bounding box", so that's one possible solution.

Really, the function should be changed so it doesn't try to palloc more than 1 GB in a single go... but OTOH there's only so far you can probably go there too. I imagine the complexity of what the function is trying to do grows geometrically with the size of the data set, so you probably need to find some way to break your data into smaller pieces and process each piece individually.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
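[A sketch of the bounding-box approach suggested above, assuming the pgRouting version in use supports the rows_where argument of pgr_createtopology(). The geometry column name, tolerance, and envelope coordinates below are placeholders, not values from the thread:]

```sql
-- Hypothetical sketch: build the topology one tile at a time instead of
-- in a single pass over all ~100M rows. Repeat with the next tile's
-- envelope until the whole extent is covered.
SELECT pgr_createtopology(
    'way_noded',            -- edge table from the thread
    0.000001,               -- snapping tolerance (placeholder)
    'the_geom', 'id', 'source', 'target',
    rows_where := 'the_geom && ST_MakeEnvelope(-71.5, 48.0, -70.5, 49.0, 4326)'
);
```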