Hi,
On Fri, Feb 08, 2019 at 02:11:33PM -0700, PegoraroF10 wrote:
> *Well, now we have two queries which stop our postgres server completely.
> That problem occurs on versions 10.6 and 11.1.
> On both servers the problem is the same.
> Linux logs of the old crash are:*
> Feb 1 18:39:53 fx-cloudserver kernel: [ 502.405788] show_signal_msg: 5 callbacks suppressed
> Feb 1 18:39:53 fx-cloudserver kernel: [ 502.405791] postgres[10195]: segfault at 24 ip 0000555dc6a71cb0 sp 00007ffc5f91db38 error 4 in postgres[555dc69b4000+6db000]
"segfault" seems to mean you hit a bug, which we'll want more information to
diagnose. Could you install debugging symbols ? Ubuntu calls their package
postgresql-10-dbg or similar. And start server with coredumps enabled, using
pg_ctl -c -D /var/lib/postgresql/10/main (or similar). Then trigger the query
and hope to find a core dump in the data directory. Or possibly it'll be
processed into /var/crash by apport daemon, depending if that's running and
enabled (check /proc/sys/kernel/core_pattern).
https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD#Getting_a_trace_from_a_randomly_crashing_backend
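A rough sketch of those steps (the package name, binary path, and data directory
are assumptions for a stock Ubuntu install of PostgreSQL 10; adjust to your layout):

    # install debugging symbols and gdb
    sudo apt-get install postgresql-10-dbg gdb

    # restart the cluster with core files enabled (pg_ctl -c)
    sudo -u postgres /usr/lib/postgresql/10/bin/pg_ctl -c -D /var/lib/postgresql/10/main restart

    # after reproducing the crash, open the core file and get a backtrace
    sudo -u postgres gdb /usr/lib/postgresql/10/bin/postgres /var/lib/postgresql/10/main/core
    (gdb) bt full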
> *Linux log of the new crash, which takes several minutes to stop:*
> Feb 8 15:06:40 fxReplicationServer kernel: [1363901.643121] postgres invoked oom-killer: gfp_mask=0x24280ca, order=0, oom_score_adj=0
> Feb 8 fxReplicationServer kernel: [1363901.643368] Killed process 9399 (postgres) total-vm:16518496kB, anon-rss:11997448kB, file-rss:38096kB
> Feb 8 17:21:16 fxReplicationServer kernel: [1371977.845728] postgres[10321]: segfault at 10 ip 00005567a6069752 sp 00007ffed70be970 error 4 in
In this case, you ran out of RAM, as you noticed. You should make sure ANALYZE
statistics are up to date (including manually ANALYZEing any parent tables).
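For example (the table names here are placeholders for your own parent and child
tables; autovacuum does not analyze inheritance parents on its own):

    ANALYZE child_table;
    ANALYZE parent_table;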
On Sun, Feb 03, 2019 at 09:05:42AM -0700, PegoraroF10 wrote:
> I'm using Postgres 10 on Ubuntu in a Google VM (8 cores, 32GB RAM, 250GB SSD)
> and the DB is 70GB
What is your work_mem setting?
You could try running the query with a lower work_mem, or check the EXPLAIN
ANALYZE output and try to resolve any bad row-count estimates.
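For instance, in the session that runs the problem query (the value and the query
itself are only illustrative):

    SHOW work_mem;                          -- current setting
    SET work_mem = '32MB';                  -- lower it for this session only
    EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- compare estimated vs. actual row counts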
> *This query runs for approximately 5 minutes. See the link above with images and
> logs and you'll see how memory grows. Memory use starts at 8GB and grows until
> all of it is used. When all memory is in use it starts to swap. When all swap is
> allocated it gets the "out of memory" error and stops completely.
> on: https://drive.google.com/open?id=18zIvkV3ew4aZ1_cxI-EmIPVql7ydvEwi*
It says "Access Denied", perhaps you could send a link to
https://explain.depesz.com/ ?
Justin