testdays-web: create automated tests #89
Blocks: #76 Create automated tests (quality/testdays-web)
Reference: quality/testdays-web!89
This PR adds a complete pytest configuration and fixtures, ready for testing the testdays-web app.

A few example tests are also included, but the total test count is kept low to keep the PR size manageable. More tests are expected to be added continually over time, and any new feature should now ship with its own tests.
Fixes #76
Ready for review
Changed title from *Create automated tests* to *testdays-web: create automated tests*

I think we should stick with having issues on the board rather than the PRs that fix them; we should only put PRs on the board if there's no issue (which should be rare). So I took this off the board and put the issue on it instead.
Some random thoughts while going through the changes:
- Could we get `DEVELOPER.md` with instructions on how to run the test suite? A `make tests` target or similar would be nice.
- When I run `pytest`, most of the tests fail, with an error similar to this: *(error output omitted)*
- Also, just FYI, if you want some inspiration, we already have a pytest configuration in blockerbugs (`setup.cfg`, `conftest.py`). I haven't touched pytest in a long time, so it'll take some time for me to remember it and be able to compare. But one difference I see is that we seem to be using sqlite in memory for testing, avoiding complications with containers — see the sketch after this list. (That should be fine, because if we use it just for runtime and don't upgrade the schema with alembic, we haven't encountered any differences between sqlite and postgres, IIRC.) I'm not saying it has to be done this way, just noting the difference.
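For reference, the sqlite-in-memory pattern mentioned above could look roughly like this — a minimal sketch, not blockerbugs' actual code (the fixture name is illustrative, and the commented line assumes a declarative `Base` from the models):

```python
import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

@pytest.fixture()
def db_session():
    """Yield a session bound to a throwaway in-memory sqlite database."""
    engine = create_engine("sqlite:///:memory:")
    # Base.metadata.create_all(engine)  # create the schema from the models
    Session = sessionmaker(bind=engine)
    session = Session()
    yield session
    session.close()
    engine.dispose()
```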
Turns out the podman socket must be enabled for the testcontainers module to work. Enabling it with `systemctl --user enable --now podman.socket` removes the docker-related errors.
I still see some test suite errors, but not related to podman:
Those test failures are actually legit and point to a problem with User model instantiation and use.
I will file a bug ticket once this PR is finished and merged.
I didn't try to understand everything, but I have at least a few questions and remarks. The test suite runs, and the initial selection of unit tests seems fine and ready to be extended. If we improve the documentation a little bit, and ideally make it work without extra steps (e.g. start the podman service automatically if needed), it should be ready to go. The main `conftest.py` seems quite complex compared to my experience from other projects, but I trust your judgement here. The important thing is that it runs and we finally have at least a few initial tests :)

@ -0,0 +17,4 @@

```python
from alembic.config import Config as AlembicConfig

ALEMBIC_CONFIG_PATH = Path.cwd() / "alembic.ini"
TEST_SETTINGS_PATH = Path.cwd() / "conf" / "test_settings.py"
```

This is the first time I see a path constructed like this, feels very unpythonic :-D Usually I use `os.path.join()`.

This code is based on the `pathlib` module, which is actually considered the successor to `os.path`. It packs some more niceties compared to `os.path` :)

@ -0,0 +45,4 @@

```python
# Creating minimal startup configuration file
TEST_SETTINGS_PATH.parent.mkdir(parents=True, exist_ok=True)
with open(TEST_SETTINGS_PATH, "w", encoding="utf-8") as temp_settings_file:
```

I wonder if we can provide these values as the default values for the test config profile (in `config.py`) and avoid creating `test_settings.py` completely? I think we're doing something like that in Blockerbugs.

The main reason for using a settings file is the containerized database URL, which is dynamic (at least the port number is) and can change between test runs.
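To illustrate the dynamic-URL point, here is a minimal sketch of what testcontainers does, assuming its `PostgresContainer` helper is what the fixtures use (the image tag is illustrative):

```python
from testcontainers.postgres import PostgresContainer

with PostgresContainer("postgres:16") as postgres:
    # The host port is picked at container start, so this URL differs per run
    db_url = postgres.get_connection_url()
    print(db_url)  # e.g. postgresql+psycopg2://test:test@localhost:<port>/test
```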
@ -0,0 +121,4 @@

```python
        None: Cleanup in finally block removes the settings file
    """
    # Create final settings file for tested app
    with open(TEST_SETTINGS_PATH, "w", encoding="utf-8") as settings_file:
```

I think this is the same as above: instead of creating and loading the config file, we could just initialize the Config instance with the right values?
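A rough sketch of that suggestion, using plain Flask since the app factory's actual shape isn't shown here (the function name is hypothetical):

```python
from flask import Flask

def make_test_app(db_url: str) -> Flask:
    """Configure the app in code rather than via a test_settings.py on disk."""
    app = Flask(__name__)
    app.config.update(
        TESTING=True,
        SQLALCHEMY_DATABASE_URI=db_url,  # the dynamic testcontainers URL
    )
    return app
```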
@ -0,0 +219,4 @@

```python
    saved_loggers_state[logger_name] = logger_obj.disabled

# Create all tables and update them to the latest schema
alembic_cmd.upgrade(alembic_cfg, "head")
```

I might be confused, but I thought that when you start a Flask+SQLAlchemy app against a completely empty database, it creates the current (the very latest) db schema, and therefore no migrations are necessary. Does it work differently with `testcontainers`?

I discussed this very problem with Claude before; here's the gist of what I learned:
While it is possible to create the db schema using the SQLAlchemy db instance method `.create_all()`, using alembic does have several advantages — chiefly that the migration scripts themselves get exercised, so the schema under test is built the same way a migrated production database would be.
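To make the comparison concrete, here is a sketch of both approaches, reusing the names from the diff above (treat it as illustrative, not the PR's exact code):

```python
from alembic import command as alembic_cmd
from alembic.config import Config as AlembicConfig

def create_schema_via_migrations(db_url: str) -> None:
    """Build the schema by replaying every migration from base to head."""
    alembic_cfg = AlembicConfig("alembic.ini")
    alembic_cfg.set_main_option("sqlalchemy.url", db_url)
    alembic_cmd.upgrade(alembic_cfg, "head")  # runs each revision in order

# The one-shot alternative skips the migration scripts entirely:
#     db.create_all()  # emits CREATE TABLE straight from the current models
```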
Ok, that sounds reasonable. But I'm still unclear on the migration steps. So when the test container is empty and you create the initial tables, you don't create them in the final (that's "head", right?) configuration, but in some older one, and then migrate? Where does it get this older db specification from? I don't see it in the code.

Currently it seems to me that we call "upgrade", but since the database is empty, I think it just uses the current schema, creates it in the final configuration immediately, and therefore no migration is done anyway. Am I wrong?

The only way to test a migration this way would be if you had an older test container and new migration steps were added; then, just once, the migration would be performed. But if you clean the db (`make clean-test`) regularly, this will never happen. So I don't think this is really testing migrations, not explicitly.

I don't want to dwell on this; it's working, and if this is equivalent to `create_all()`, then no harm actually done. Feel free to push it this way. I'm just trying to understand it.
FWIW: Both Claude and Gemini claim this starts with the first revision and upgrades it through every migration towards HEAD. So it really seems to check the migrations, as claimed.
Changed title from *testdays-web: create automated tests* to *WIP: testdays-web: create automated tests*

Branch updated from eb719d4d76 to 3c0dbe1783

Changed title from *WIP: testdays-web: create automated tests* to *testdays-web: create automated tests*

Ready for review
@ -0,0 +113,4 @@

```python
# Check if podman socket exists, enable it if it doesn't
podman_socket_path = Path(xdg_runtime_dir) / "podman" / "podman.sock"
if not podman_socket_path.exists():
```

Turns out my DOCKER_HOST variable is populated even if I stop the podman socket. And not only that, `/run/user/1000/podman/podman.sock` also exists, so the code to start the service is never reached.

Maybe we should run `systemctl --user is-active podman.socket` to check it instead?

To reproduce: *(steps not preserved)*
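A sketch of what that check could look like (the helper name is hypothetical, not the PR's code):

```python
import subprocess

def podman_socket_active() -> bool:
    """Ask systemd for the unit state instead of trusting the socket file."""
    result = subprocess.run(
        ["systemctl", "--user", "is-active", "--quiet", "podman.socket"],
        check=False,
    )
    return result.returncode == 0  # `is-active` exits 0 only when active
```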
@ -0,0 +115,4 @@

```python
podman_socket_path = Path(xdg_runtime_dir) / "podman" / "podman.sock"
if not podman_socket_path.exists():
    LOGGER.info("Podman socket not found, enabling podman.socket service")
    systemctl_cmd = ["systemctl", "--user", "enable", "--now", "podman.socket"]
```

This not only starts the service, but also enables it permanently, even after reboot. Is it intentional? I guess there might be some security or other reasons why it's not enabled by default, but I haven't really looked into that much. Either way, why not just start the service and avoid enabling it permanently, i.e. just run `systemctl --user start podman.socket`? That should be good enough for us, no?

Changed title from *testdays-web: create automated tests* to *WIP: testdays-web: create automated tests*

Changed title from *WIP: testdays-web: create automated tests* to *testdays-web: create automated tests*

Ready for review
This works for me now, thanks for working on it!
I have one further recommendation: either allow `make test` to pass arguments to pytest, or document in `DEVELOPER.md` how to run it manually with optional arguments (`RUNMODE=test pytest <arguments>`).

Pytest arguments are very useful when debugging, e.g. for running only a particular file/method, or when using `-x`, `--capture` and similar options. See the sketch below for one way the Makefile could forward them.

It can be part of this PR or some future commit, it doesn't really matter, but I think at least adding one or two sentences to the documentation will be helpful for our future selves (and others) 😉
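One possible shape for the pass-through, sketched as a Makefile target (`PYTEST_ARGS` is an assumed variable name, not something from the PR):

```make
# Forward extra arguments from `make test` to pytest, e.g.:
#   make test PYTEST_ARGS="-x path/to/test_file.py"
test:
	RUNMODE=test pytest $(PYTEST_ARGS)
```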
Changed title from *testdays-web: create automated tests* to *WIP: testdays-web: create automated tests*

This enhancement sounds very useful indeed. Extending the PR code with this modification.
Changed title from *WIP: testdays-web: create automated tests* to *testdays-web: create automated tests*

Awesome, works great. Thanks a lot.
Branch updated from 09775730af to ad5a93acbf

Branch updated from ad5a93acbf to db3c8345d6