Re: backup manifests - Mailing list pgsql-hackers

From Robert Haas
Subject Re: backup manifests
Msg-id CA+TgmoYdEEtO9HEPdDjV8DUT+=haZf_3bsMEuVgkOWTjt+PSKA@mail.gmail.com
In response to Re: backup manifests  (David Steele <david@pgmasters.net>)
Responses Re: backup manifests  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
On Thu, Jan 9, 2020 at 8:19 PM David Steele <david@pgmasters.net> wrote:
> For example, have you considered what will happen if you have a file in
> the cluster with a tab in the name?  This is perfectly valid in Posix
> filesystems, at least.

Yeah, there's code for that in the patch I posted. I don't think the
validator patch deals with it, but that's fixable.
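
To make the problem concrete, here is a rough sketch (purely
illustrative, not the escaping scheme the posted patch actually uses)
of writing a file name into a tab-separated manifest line while
backslash-escaping the characters that would otherwise break the
format:

#include <stdio.h>

/*
 * Hypothetical sketch only, not code from the patch: emit "name",
 * backslash-escaping the characters that would break a tab-separated
 * manifest line (tab, newline, and backslash itself).
 */
static void
write_escaped_name(FILE *out, const char *name)
{
	for (const char *p = name; *p; p++)
	{
		if (*p == '\t')
			fputs("\\t", out);
		else if (*p == '\n')
			fputs("\\n", out);
		else if (*p == '\\')
			fputs("\\\\", out);
		else
			fputc(*p, out);
	}
}

The reader then has to undo exactly that transformation, which is
where the complication David mentions below starts to creep in.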

> You may already be escaping tabs but the simple
> code snippet you provided earlier isn't going to work so well either
> way.  It gets complicated quickly.

Sure, but obviously neither of those code snippets was intended to be
used straight out of the box. Even after you parse the manifest as
JSON, you would still - if you really want to validate it - check that
you have the keys and values you expect, that the individual field
values are sensible, etc. I still stand by my earlier contention that,
as things stand today, you can parse an ad-hoc format in less code
than a JSON format. If we had a JSON parser available on the front
end, I think it'd be roughly comparable, but maybe the JSON format
would come out a bit ahead. Not sure.
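
Just to illustrate what I mean by "less code", here is roughly what
parsing one line of a tab-separated manifest looks like. The
three-field layout is invented for the example, it ignores the
file-name escaping question from above, and the per-field sanity
checks at the end would be needed even if the line had first been run
through a JSON parser:

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/*
 * Illustrative sketch with an invented layout: split one tab-separated
 * manifest line into path, size, and checksum, then do the kind of
 * per-field validation that is needed no matter how the line was
 * parsed. Modifies "line" in place via strtok().
 */
static bool
parse_manifest_line(char *line, char **path, long *size, char **checksum)
{
	char *size_field;

	*path = strtok(line, "\t");
	size_field = strtok(NULL, "\t");
	*checksum = strtok(NULL, "\t\n");

	if (*path == NULL || size_field == NULL || *checksum == NULL)
		return false;		/* wrong number of fields */

	*size = strtol(size_field, NULL, 10);
	if (*size < 0)
		return false;		/* size must be non-negative */

	return true;
}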

> There are a few MIT-licensed JSON projects that are implemented in a
> single file.  cJSON is very capable while JSMN is very minimal. Is it
> possible that one of those (or something like it) would be acceptable?
> It looks like the one requirement we have is that the JSON can be
> streamed rather than just building up one big blob?  Even with that
> requirement there are a few tricks that can be used.  JSON nests rather
> nicely after all so the individual file records can be transmitted
> independently of the overall file format.
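
To make the streaming idea concrete: because each per-file record is
a self-contained JSON object, a writer can append records one at a
time instead of assembling the whole manifest in memory first.
Something along these lines, with field names invented for the example
(and a real version would still need to JSON-escape the strings it
writes):

#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical illustration, not code from any patch: append one
 * per-file record to the manifest as a self-contained JSON object,
 * so records can be emitted as they are discovered rather than
 * buffered into one big blob.
 */
static void
append_file_record(FILE *out, bool first, const char *path,
				   long size, const char *checksum)
{
	if (!first)
		fputs(",\n", out);
	fprintf(out, "  {\"path\": \"%s\", \"size\": %ld, \"checksum\": \"%s\"}",
			path, size, checksum);
}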

I haven't really looked at these. I would have expected that including
a second JSON parser in core would provoke significant opposition.
Generally, people dislike having more than one piece of code to do the
same thing. I would also expect that depending on an external package
would provoke significant opposition. If we suck the code into core,
then we have to keep it up to date with the upstream, which is a
significant maintenance burden - look at all the time Tom has spent on
snowball, regex, and time zone code over the years. If we don't suck
the code into core but depend on it, then every developer needs to
have that package installed on their operating system, and every
packager has to make sure that it is being built for their OS so that
PostgreSQL can depend on it. Perhaps JSON is so popular today that
imposing such a requirement would be met with a groundswell of
support rather than opposition, but based on past precedent I would
assume that if I
committed a patch of this sort the chances that I'd have to revert it
would be about 99.9%. Optional dependencies for optional features are
usually pretty well-tolerated when they're clearly necessary: e.g. you
can't really do JIT without depending on something like LLVM, but the
bar for a mandatory dependency has historically been quite high.

> Would it be acceptable to bring in JSON code with a compatible license
> to use in libcommon?  If so I'm willing to help adapt that code for use
> in Postgres.  It's possible that the pgBackRest code could be adapted
> similarly, but it might make more sense to start from one of these
> general purpose parsers.

For the reasons above, I expect this approach would be rejected, by
Tom and by others.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


