RFC: adding pytest as a supported test framework - Mailing list pgsql-hackers
From | Jacob Champion
Subject | RFC: adding pytest as a supported test framework
Date |
Msg-id | CAOYmi+kThkM9Z87u=R_Wi7fCor2i+UZKAyq0UCyprzCwTQvqgA@mail.gmail.com
Responses | Re: RFC: adding pytest as a supported test framework
Re: RFC: adding pytest as a supported test framework
Re: RFC: adding pytest as a supported test framework
List | pgsql-hackers
Hi all,

For the v18 cycle, I would like to try to get pytest [1] in as a supported test driver, in addition to the current offerings. (I'm tempted to end the email there.)

We had an unconference session at PGConf.dev [2] around this topic. There seemed to be a number of nodding heads and some growing momentum. So here's a thread to try to build wider consensus. If you have a competing or complementary test proposal in mind, heads up!

== Problem Statement(s) ==

1. We'd like to rerun a failing test by itself.

2. It'd be helpful to see _what_ failed without poring over logs.

These two got the most nodding heads of the points I presented. (#1 received tongue-in-cheek applause.) I think most modern test frameworks are going to give you these things, but unfortunately we don't have them. Additionally,

3. Many would like to use modern developer tooling during testing (language servers! autocomplete! debuggers! type checking!) and we can't right now.

4. It'd be great to split apart client-side tests from server-side tests. Driving Postgres via psql all the time is fine for acceptance testing, but it becomes a big problem when you need to test how clients talk to servers with incompatible feature sets, or how a peer behaves when talking to something buggy.

5. Personally, I want to implement security features test-first (for high code coverage and other benefits), and our Perl-based tests are usually too cumbersome for that.

== Why pytest? ==

From the small and biased sample at the unconference session, it looks like a number of people have independently settled on pytest in their own projects. In my opinion, pytest occupies a nice space where it solves some of the above problems for us, and it gives us plenty of tools to solve the other problems without too much pain.

Problem 1 (rerun failing tests): One architectural roadblock to this in our Test::More suite is that tests depend on setup done by previous tests. pytest allows you to declare each test's setup requirements via pytest fixtures, letting the test runner build up the world exactly as it needs to be for a single isolated test. These fixtures may be given a "scope" so that multiple tests may share the same setup for performance or other reasons.

Problem 2 (seeing what failed): pytest does this via assertion introspection and very detailed failure reporting. If you haven't seen this before, take a look at the pytest homepage [1]; there's an example of a full log.

Problem 3 (modern tooling): We get this from Python's very active developer base.

Problems 4 (splitting client and server tests) and 5 (making it easier to write tests first) aren't really Python- or pytest-specific, but I have done both quite happily in my OAuth work [3], and I've since adapted that suite multiple times to develop and test other proposals on this list, like LDAP/SCRAM, client encryption, direct SSL, and compression.
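To make Problems 1 and 2 concrete, here is a minimal, self-contained sketch of a pytest test. (The names and the stand-in config dict are invented for illustration; a real fixture would initdb and start a server.)

    import pytest

    @pytest.fixture(scope="module")
    def server_config():
        # Setup shared by every test in this module that requests it.
        # A real suite would initdb and launch a server here; this
        # dict is a stand-in for illustration.
        config = {"port": 5432, "ssl": "off"}
        yield config
        # Anything after the yield is teardown, run once the last
        # test in the module has finished.

    def test_ssl_is_negotiated(server_config):
        # A bare assert is enough: on failure, pytest prints both
        # sides of the comparison, not just a pass/fail bit.
        assert server_config["ssl"] == "on"

Running "pytest test_file.py::test_ssl_is_negotiated" reruns exactly that one test, with exactly the setup it declared, and the failure report shows the introspected values (assert 'off' == 'on').

And for Problem 4, here's a taste of driving the wire protocol directly from the standard library, no psql involved. (This sketch just sends a standard SSLRequest packet -- length 8, magic code 80877103 -- and reads the server's one-byte verdict; it's illustrative, not code from the suite in [3].)

    import socket
    import struct

    def server_offers_ssl(host, port):
        # Open a raw connection and ask the server whether it is
        # willing to negotiate SSL: an SSLRequest is two network-order
        # int32s, and the reply is a single byte, 'S' or 'N'.
        with socket.create_connection((host, port)) as sock:
            sock.sendall(struct.pack("!ii", 8, 80877103))
            return sock.recv(1) == b"S"

A buggy or truncated packet is just as easy to send, which is the point: the test itself gets to play the misbehaving peer.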
Python's standard library has lots of power by itself, with very good documentation. And virtualenvs and better package tooling have made it much easier, IMO, to avoid the XKCD dependency tangle [4] of the 2010s. When it comes to third-party packages, which I think we're probably going to want in moderation, we would still need to discuss supply-chain safety. Python is not as mature here as, say, Go.

== A Plan ==

Even if everyone were on board immediately, there's a lot of work to do. I'd like to add pytest in a more probationary status, so we can iron out the inevitable wrinkles. My proposal would be:

1. Commit bare-bones support in our Meson setup for running pytest, so everyone can kick the tires independently.

2. Add a test for something that we can't currently exercise.

3. Port a test from a place where the maintenance is terrible, to see if we can improve it.

If we hate it by that point, no harm done; tear it back out. Otherwise, we keep rolling forward.

Thoughts? Suggestions?

Thanks,
--Jacob

[1] https://docs.pytest.org/
[2] https://wiki.postgresql.org/wiki/PGConf.dev_2024_Developer_Unconference#New_testing_frameworks
[3] https://github.com/jchampio/pg-pytest-suite
[4] https://xkcd.com/1987/