Re: Cluster::restart dumping logs when stop fails - Mailing list pgsql-hackers

From Andres Freund
Subject Re: Cluster::restart dumping logs when stop fails
Date
Msg-id 20240407170114.nm2ww5xwekintha5@awork3.anarazel.de
In response to Re: Cluster::restart dumping logs when stop fails  (Daniel Gustafsson <daniel@yesql.se>)
List pgsql-hackers
On 2024-04-07 18:51:40 +0200, Daniel Gustafsson wrote:
> > On 7 Apr 2024, at 18:28, Andres Freund <andres@anarazel.de> wrote:
> > 
> > On 2024-04-07 16:52:05 +0200, Daniel Gustafsson wrote:
> >>> On 7 Apr 2024, at 14:51, Andrew Dunstan <andrew@dunslane.net> wrote:
> >>> On 2024-04-06 Sa 20:49, Andres Freund wrote:
> >> 
> >>>> That's probably an unnecessary optimization, but it seems a tad silly to read an
> >>>> entire, potentially sizable, file just to use the last 1k. Not sure if the way
> >>>> slurp_file() uses seek supports negative offsets; the docs read to me like that
> >>>> may only be supported with SEEK_END.
> >>> 
> >>> We should enhance slurp_file() so it uses SEEK_END if the offset is negative.
> >> 
> >> Absolutely agree.  Reading the thread I think Andres argues for not printing
> >> anything at all in this case, but we should support negative offsets anyway; it
> >> will for sure come in handy.
> > 
> > I'm ok with printing path + some content or just the path.
> 
> I think printing the last 512 bytes or so would be a good approach; I'll take
> care of it later tonight. That would be a backpatchable change IMHO.

+1 - thanks for quickly improving this.
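
For reference, a minimal sketch of the slurp_file() change discussed above,
assuming its existing ($filename, $offset) calling convention. This is not the
committed patch; the helper name slurp_file_tail, the clamping caveat, and the
512-byte figure in the usage example are purely illustrative.

    use strict;
    use warnings;
    use Carp;
    use Fcntl qw(SEEK_SET SEEK_END);

    # Sketch: like slurp_file(), but a negative $offset seeks relative to the
    # end of the file (SEEK_END), so callers can grab e.g. the last 512 bytes
    # of a large server log without reading the whole thing.
    sub slurp_file_tail
    {
        my ($filename, $offset) = @_;
        local $/;    # slurp mode: read everything from the current position

        open(my $fh, '<', $filename)
          or croak "could not read \"$filename\": $!";

        if (defined($offset))
        {
            # Note: seeking to -N from SEEK_END fails if the file is shorter
            # than N bytes, so a real version would want to clamp $offset.
            seek($fh, $offset, ($offset < 0 ? SEEK_END : SEEK_SET))
              or croak "could not seek \"$filename\": $!";
        }

        my $contents = <$fh>;
        close($fh);
        return $contents;
    }

    # Illustrative usage in Cluster::restart, after pg_ctl stop fails:
    #   print "tail of ", $self->logfile, ":\n",
    #     slurp_file_tail($self->logfile, -512);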


