On 10/17/22 7:03 AM, Magnus Hagander wrote:
>
>
> On Mon, Oct 17, 2022 at 10:40 AM Dave Page <dpage@pgadmin.org> wrote:
>
>
>
> On Mon, 17 Oct 2022 at 09:32, Daniel Gustafsson <daniel@yesql.se> wrote:
>
> > On 17 Oct 2022, at 10:29, Dave Page <dpage@pgadmin.org> wrote:
>
> > A couple of possible solutions that spring to mind:
> >
> > 1) Jonathan (or whoever is handling the release process)
> could ensure those pages are updated as part of the release
> push, but that would require confirmation from Sandeep or
> someone on the EDB team that the packages have been published
> and everything looks good.
FWIW, before the release I do a general sweep of the entire website to
update content. The challenge with the downloads page is the one that's
been stated a few times: it's in an indeterminate state until everyone
confirms that the packages are ready.
Also, the release coordinator posts to -packagers with guidance on the
release date. An interim manual step would be for a packager to reply
when the packages are ready for a major release (and I can also ping the
list repeatedly). I can then ensure the content is updated on the
website for GA.
> > 2) We could database-ise the data in those tables, and then
> Sandeep could update that through the Django admin interface at
> the appropriate time. He does have access to a limited part of
> the admin interface already.
We have this in effect already, just s/database/website update/. It
would only add a nicer interface for making those updates, so +0 on
this.
> > 3) EDB publish an API endpoint with the available releases that
> > pg.org consumes and uses to create the page?
>
>
> That could also work, though I suspect it might be less than easy
> for me to get someone on the right team to build that any time soon.
>
>
> Doesn't have to be an API of course. Just a static json file for
> example, which we could import at regular intervals. Similar to how we
> get the data for the yum and apt repos for example, just pulling it in
> somewhere else. We wouldn't want to poll something all the time and do
> it "live", just sync up at regular intervals.
I'm OK with this approach.
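For illustration, a minimal sketch of what the periodic import might look
like, assuming a hypothetical JSON schema -- the field names, URL, and
versions here are invented for the example, not anything EDB publishes:

```python
import json

# Hypothetical payload EDB might publish as a static file, e.g. at
# https://example.com/pg-installers.json. The schema is invented purely
# for illustration; the sync job would fetch it on a schedule.
SAMPLE = """
{
  "generated": "2022-10-17T07:00:00Z",
  "installers": [
    {"version": "15.0", "platform": "windows-x64",
     "url": "https://example.com/pg-15.0-win-x64.exe"},
    {"version": "15.0", "platform": "macos",
     "url": "https://example.com/pg-15.0-macos.dmg"}
  ]
}
"""

def parse_installers(payload: str) -> list:
    """Parse and sanity-check the feed before the site imports it."""
    data = json.loads(payload)
    installers = data["installers"]
    for entry in installers:
        # Refuse the whole import on an incomplete row rather than
        # publish a broken downloads page.
        if not all(entry.get(k) for k in ("version", "platform", "url")):
            raise ValueError(f"incomplete installer entry: {entry}")
    return installers

installers = parse_installers(SAMPLE)
print(len(installers))  # 2
```

A cron job, like the existing yum/apt repo syncs, could pull the file at
regular intervals and regenerate the page only when the parse succeeds,
so a half-published release never reaches the live site.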
Thanks,
Jonathan