Re: block-level incremental backup - Mailing list pgsql-hackers
| From | Stephen Frost |
|---|---|
| Subject | Re: block-level incremental backup |
| Date | |
| Msg-id | 20190422180341.GL6197@tamriel.snowman.net |
| In response to | Re: block-level incremental backup (Andres Freund <andres@anarazel.de>) |
| List | pgsql-hackers |
Greetings,

* Andres Freund (andres@anarazel.de) wrote:
> On 2019-04-19 20:04:41 -0400, Stephen Frost wrote:
> > I agree that we don't want another implementation and that there's a
> > lot that we want to do to improve replay performance.  We've already
> > got frontend tools which work with multiple execution threads, so I'm
> > not sure I get the "not easily feasible" bit, and the argument about
> > the checkpointer seems largely related to that (as in- if we didn't
> > have multiple threads/processes then things would perform quite
> > badly... but we can and do have multiple threads/processes in
> > frontend tools today, even in pg_basebackup).
>
> You need not just multiple execution threads, but basically a new
> implementation of shared buffers, locking, process monitoring, with
> most of the related infrastructure.  You're literally talking about
> reimplementing a very substantial portion of the backend.  I'm not
> sure I can convey in written words - via a public medium - how bad an
> idea it would be to go there.

Yes, there'd be some need for locking and process monitoring, though if
we aren't supporting ongoing read queries at the same time, there's a
whole bunch of things that we don't need from the existing backend.

> > > Which I think is entirely reasonable.  With the 'consistent' and
> > > LSN recovery targets one already can get most of what's needed from
> > > such a tool, anyway.  I'd argue the biggest issue there is that
> > > there's no equivalent to starting postgres with a private socket
> > > directory on Windows, and perhaps an option or two making it easier
> > > to start postgres in a "private" mode for things like this.
> >
> > This would mean building a way to do parallel WAL replay into the
> > server binary though, as discussed above, and it seems like making
> > that work in a way that allows us to still be available as a
> > read-only standby would be quite a bit more difficult.  We could
> > possibly support parallel WAL replay only when we aren't a replica,
> > but from the same binary.
>
> I'm doubtful that we should try to implement parallel WAL apply that
> can't support HS - a substantial portion of the logic to avoid issues
> around relfilenode reuse, consistency, etc. is going to be necessary
> for non-HS-aware apply anyway.  But if somebody had a concrete
> proposal for something that's fundamentally only doable without HS, I
> could be convinced.

I'd certainly prefer that we support parallel WAL replay *with* HS;
that just seems like a much larger problem, but I'd be quite happy to
be told that it wouldn't be that much harder.
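For reference, the "private" replay being discussed looks roughly like
the following on Unix today (a sketch only, assuming the v12-style
recovery configuration- the spelling of these settings varies by
version, the paths and target LSN here are arbitrary, and as noted
there's no Windows equivalent of the private socket directory):

```shell
# Replay WAL over a restored base backup up to a target LSN, keeping
# the instance private: no TCP listener, sockets in a private dir.
touch /tmp/replay/recovery.signal
postgres -D /tmp/replay \
    -c listen_addresses='' \
    -c unix_socket_directories='/tmp/replay-socket' \
    -c restore_command='cp /path/to/archive/%f "%p"' \
    -c recovery_target_lsn='0/3000000' \
    -c recovery_target_action='shutdown'
```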
> > A lot of this part of the discussion feels like a tangent though,
> > unless I'm missing something.
>
> I'm replying to:
>
> On 2019-04-17 18:43:10 -0400, Stephen Frost wrote:
> > Wow.  I have to admit that I feel completely the opposite of that-
> > I'd *love* to have an independent tool (which ideally uses the same
> > code through the common library, or similar) that can be run to
> > apply WAL.
>
> And I'm basically saying that anything that starts from this premise
> is fatally flawed (in the ex falso quodlibet kind of sense ;)).

I'd just say that it'd be... difficult. :)

> > The "WAL compression" tool contemplated previously would be much
> > simpler and not the full-blown WAL replay capability, which would be
> > left to the server, unless you're suggesting that even that should
> > be exclusively the purview of the backend?  Though that ship's
> > already sailed, given that external projects have implemented it.
>
> I'm extremely doubtful of such tools (but it's not what I was
> responding to, see above).  I'd be extremely surprised if even one of
> them came close to being correct.  The old FPI removal tool had
> data-corrupting bugs left and right.

I have concerns about it myself, which is why I'd really like to see
something in core that does it, and does it the right way, that other
projects could then leverage (ideally by just linking into the library
without having to rewrite what's in core, though that might not be an
option for things like WAL-G that are in Go and possibly don't want to
link in some C library).
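Just to sketch what "linking into the library" might mean- this is
purely hypothetical, nothing with these names exists in core today-
the frontend-visible surface could be as small as:

```c
/*
 * Hypothetical sketch only- no such header or functions exist in core.
 * The idea: a small frontend-linkable API (in the spirit of
 * src/common) that owns WAL-record rewriting, e.g. dropping full-page
 * images, so external backup tools don't each reimplement those rules.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;                /* matches xlogdefs.h */

typedef struct PgWalRewriter PgWalRewriter; /* opaque, hypothetical */

/* Open a WAL stream covering [start_lsn, end_lsn) for rewriting. */
extern PgWalRewriter *pgwal_rewriter_create(const char *wal_dir,
                                            XLogRecPtr start_lsn,
                                            XLogRecPtr end_lsn);

/* Copy out the next (possibly rewritten) record; false at end. */
extern bool pgwal_rewriter_next(PgWalRewriter *rw,
                                void *buf, size_t buflen,
                                size_t *record_len);

extern void pgwal_rewriter_destroy(PgWalRewriter *rw);
```

A Go tool could bind something like that through cgo, though that's
exactly the dependency that pure-Go projects such as WAL-G may not
want to take on.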
> > Having a library to provide that which external projects could
> > leverage would be nicer than having everyone write their own
> > version.
>
> No, I don't think that's necessarily true.  Something complicated
> that's hard to get right doesn't have to be provided by core.  Even
> if other projects decide that their risk/reward assessment is
> different from core postgres'.  We don't have to take on all kinds of
> work and complexity for external tools.

No, it doesn't have to be provided by core, but I sure would like it
to be, and I'd be much more comfortable if it was, because then we'd
also take care not to break whatever assumptions are made (or to do so
in a way that can be detected and/or handled) as new code is written.
As discussed above, as long as it isn't provided by core, it's not
going to be trusted, it's likely to have bugs, and it'll probably be
broken by things happening in core moving forward.  The only option
left is "well, we just won't have that capability at all".  Maybe
that's what you're getting at here, but I'm not sure I agree with that
as the result.

Thanks,

Stephen