testdays-web: create automated tests #89

Merged
jgroman merged 1 commit from feature/76-create-automated-tests into develop 2026-03-06 14:41:05 +00:00
Owner

This PR adds complete pytest configuration and fixtures, ready for testing the testdays-web app.
A few example tests are included as well, but the total test count is kept low to keep the PR size manageable.
More tests are expected to be added continually over time, and any new feature should now be created together with its own tests.

Fixes #76

Author
Owner

Ready for review

kparal requested review from kparal 2026-01-15 12:17:48 +00:00
kparal self-assigned this 2026-01-15 12:19:18 +00:00
kparal added this to the Sprint 1 project 2026-01-15 12:24:11 +00:00
adamwill modified the project from Sprint 1 to Sprint 2 2026-01-27 17:26:33 +00:00
adamwill changed title from Create automated tests to testdays-web: create automated tests 2026-01-27 17:26:43 +00:00
adamwill removed this from the Sprint 2 project 2026-01-27 18:07:23 +00:00
Owner

I think we should stick with having issues in the board rather than the PRs that fix them; we should only put PRs on the board if there's no issue (which should be rare). So I took this off the board and put the issue on it instead.

Owner

Some random thoughts while going through the changes:

  • AI-generated files should probably have a docstring at the top saying that they are generated
  • Can you extend DEVELOPER.md with instructions on how to run the test suite?
  • It would be nice if the test suite could be run through make tests or similar
  • Does this depend on docker? I see some code to set it up for podman perhaps? If I naively run just pytest, most of the tests fail, with an error similar to this:
E           docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))

../../venv/testdays-web/lib/python3.14/site-packages/docker/api/client.py:230: DockerException
======================================================== short test summary info =========================================================
ERROR tests/functional/test_main.py::test_index_page_loads - docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or...
Owner

Also, just FYI, if you want some inspiration, we already have pytest configuration in blockerbugs ([setup.cfg](https://forge.fedoraproject.org/quality/blockerbugs/src/branch/develop/setup.cfg), [conftest.py](https://forge.fedoraproject.org/quality/blockerbugs/src/branch/develop/testing/conftest.py)). I haven't touched pytest in a long time, so it'll take some time for me to remember it and be able to compare the two. But one difference I see is that we seem to be using sqlite in memory for testing there, avoiding complications with containers. (That should be fine, because if we use it just for runtime and don't upgrade the schema with alembic, we haven't encountered any differences between sqlite and postgres, IIRC.) I'm not saying it has to be done this way, just noting the difference.
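For illustration, the in-memory approach boils down to pointing the engine at `sqlite:///:memory:`; here is a minimal stdlib-only sketch of the idea (the table and data are made up, and a real setup would go through SQLAlchemy rather than `sqlite3` directly):

```python
import sqlite3

# An in-memory database lives only as long as the connection:
# no files, no containers, and a clean slate for every test session.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)
conn.close()
```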
Owner

Turns out the podman socket must be enabled for the testcontainers module to work. This removes the docker-related errors:

systemctl --user enable --now podman.socket

I still see some test suite errors, but not related to podman:

================================================================ FAILURES ================================================================
___________________________________________ TestOIDCLogin.test_empty_string_fields_are_skipped ___________________________________________

self = <tests.functional.test_oidc_login.TestOIDCLogin object at 0x7f00c5f71c50>, app = <Flask 'testdays'>
client = <FlaskClient <Flask 'testdays'>>, db_session = <sqlalchemy.orm.scoping.scoped_session object at 0x7f00c5b48b90>

    def test_empty_string_fields_are_skipped(self, app, client, db_session):
        """
        GIVEN an OIDC profile with empty string fields
        WHEN a user logs in
        THEN empty strings should be skipped in the fallback chain
        """
        oidc_profile = {
            'sub': 'empty-field-user',
            'name': '',  # Empty string should be skipped
            'nickname': 'validnick',
            'preferred_username': '',  # Empty string should be skipped
            'email': 'valid@example.com'
        }
    
        set_oidc_profile(app, oidc_profile)
    
        response = client.get('/oidc_login', follow_redirects=False)
    
        assert response.status_code == 302
    
        user = db_session.query(User).filter_by(sub='fedora:empty-field-user').first()
        assert user is not None
        # Should use nickname since name is empty
        assert user.displayname == 'validnick'
        # Should use email since preferred_username is empty
>       assert user.username == 'valid@example.com'
E       AssertionError: assert 'validnick' == 'valid@example.com'
E         
E         - valid@example.com
E         + validnick

tests/functional/test_oidc_login.py:308: AssertionError
---------------------------------------------------------- Captured stdout call ----------------------------------------------------------
{'email': 'valid@example.com',
 'name': '',
 'nickname': 'validnick',
 'preferred_username': '',
 'sub': 'empty-field-user'}
15 fedora:empty-field-user validnick
_________________________________________ TestOIDCLogin.test_whitespace_only_fields_are_skipped __________________________________________

self = <tests.functional.test_oidc_login.TestOIDCLogin object at 0x7f00c5fcc230>, app = <Flask 'testdays'>
client = <FlaskClient <Flask 'testdays'>>, db_session = <sqlalchemy.orm.scoping.scoped_session object at 0x7f00c5b48b90>

    def test_whitespace_only_fields_are_skipped(self, app, client, db_session):
        """
        GIVEN an OIDC profile with whitespace-only fields
        WHEN a user logs in
        THEN whitespace-only strings should be skipped in the fallback chain
        """
        oidc_profile = {
            'sub': 'whitespace-user',
            'name': '   ',  # Whitespace should be skipped
            'nickname': 'validnick',
            'preferred_username': ' \t\n ',  # Whitespace should be skipped
            'email': 'valid@example.com'
        }
    
        set_oidc_profile(app, oidc_profile)
    
        response = client.get('/oidc_login', follow_redirects=False)
    
        assert response.status_code == 302
    
        user = db_session.query(User).filter_by(sub='fedora:whitespace-user').first()
        assert user is not None
        # Should use nickname since name is whitespace
        assert user.displayname == 'validnick'
        # Should use email since preferred_username is whitespace
>       assert user.username == 'valid@example.com'
E       AssertionError: assert 'validnick' == 'valid@example.com'
E         
E         - valid@example.com
E         + validnick

tests/functional/test_oidc_login.py:335: AssertionError
---------------------------------------------------------- Captured stdout call ----------------------------------------------------------
{'email': 'valid@example.com',
 'name': '   ',
 'nickname': 'validnick',
 'preferred_username': ' \t\n ',
 'sub': 'whitespace-user'}
16 fedora:whitespace-user validnick
______________________________________________________ TestUserModel.test_new_user _______________________________________________________

self = <tests.unit.test_models.TestUserModel object at 0x7f00c5f89450>
mock_user = MockUser(sub='test_sub', username='testuser', displayname='Test User')

    def test_new_user(self, mock_user):
        """
        GIVEN a User model
        WHEN a new User is created with minimal parameters
        THEN check all fields (sub, username, displayname, role, disabled) are defined correctly
        """
        user = User(
            sub=mock_user.sub, username=mock_user.username, displayname=None, role=None
        )
        assert user.sub == mock_user.sub
        assert user.username == mock_user.username
        assert user.displayname == mock_user.username  # defaults to username when None
        assert user.role == UserRoles.none  # defaults to UserRoles.none
>       assert user.disabled is False  # defaults to False
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E       assert None is False
E        +  where None = <User test_sub, disabled:None, role: UserRoles.none>.disabled

tests/unit/test_models.py:44: AssertionError
_________________________________________________ TestUserModel.test_user_has_role_user __________________________________________________

self = <tests.unit.test_models.TestUserModel object at 0x7f00c6a4f530>
mock_user = MockUser(sub='test_sub', username='testuser', displayname='Test User')

    def test_user_has_role_user(self, mock_user):
        """
        GIVEN a User with user role
        WHEN has_role is called
        THEN it should return True only for matching role
        """
        regular_user = User(
            sub=mock_user.sub, username=mock_user.username, role=UserRoles.user
        )
>       assert regular_user.has_role(UserRoles.user.value) is True
E       AssertionError: assert False is True
E        +  where False = has_role('User')
E        +    where has_role = <User test_sub, disabled:None, role: UserRoles.user>.has_role
E        +    and   'User' = <UserRoles.user: 'User'>.value
E        +      where <UserRoles.user: 'User'> = UserRoles.user

tests/unit/test_models.py:93: AssertionError
________________________________________________ TestUserModel.test_user_has_role_creator ________________________________________________

self = <tests.unit.test_models.TestUserModel object at 0x7f00c8a9de10>
mock_user = MockUser(sub='test_sub', username='testuser', displayname='Test User')

    def test_user_has_role_creator(self, mock_user):
        """
        GIVEN a User with creator role
        WHEN has_role is called
        THEN it should return True only for matching role
        """
        creator_user = User(
            sub=mock_user.sub, username=mock_user.username, role=UserRoles.creator
        )
>       assert creator_user.has_role(UserRoles.creator.value) is True
E       AssertionError: assert False is True
E        +  where False = has_role('Creator')
E        +    where has_role = <User test_sub, disabled:None, role: UserRoles.creator>.has_role
E        +    and   'Creator' = <UserRoles.creator: 'Creator'>.value
E        +      where <UserRoles.creator: 'Creator'> = UserRoles.creator

tests/unit/test_models.py:106: AssertionError
_________________________________________________ TestUserModel.test_user_has_role_none __________________________________________________

self = <tests.unit.test_models.TestUserModel object at 0x7f00c5f27ce0>
mock_user = MockUser(sub='test_sub', username='testuser', displayname='Test User')

    def test_user_has_role_none(self, mock_user):
        """
        GIVEN a User with no role
        WHEN has_role is called
        THEN it should return False for all roles except none
        """
        none_user = User(
            sub=mock_user.sub, username=mock_user.username, role=UserRoles.none
        )
>       assert none_user.has_role(UserRoles.none.value) is True
E       AssertionError: assert False is True
E        +  where False = has_role('None')
E        +    where has_role = <User test_sub, disabled:None, role: UserRoles.none>.has_role
E        +    and   'None' = <UserRoles.none: 'None'>.value
E        +      where <UserRoles.none: 'None'> = UserRoles.none

tests/unit/test_models.py:119: AssertionError
________________________________________________ TestUserModel.test_user_disabled_default ________________________________________________

self = <tests.unit.test_models.TestUserModel object at 0x7f00c5fcd130>
mock_user = MockUser(sub='test_sub', username='testuser', displayname='Test User')

    def test_user_disabled_default(self, mock_user):
        """
        GIVEN a User model
        WHEN a new User is created
        THEN disabled should default to False
        """
        user = User(sub=mock_user.sub, username=mock_user.username)
>       assert user.disabled is False
E       assert None is False
E        +  where None = <User test_sub, disabled:None, role: UserRoles.none>.disabled

tests/unit/test_models.py:155: AssertionError
Author
Owner

Those test failures are actually legit and point to a problem with User model instantiation and use.
I will file a bug ticket once this PR is finished and merged.

kparal left a comment
Owner

I didn't try to understand everything, but I have at least a few questions and remarks. The test suite runs, and the initial selection of unit tests seems fine, ready to be extended. If we improve the documentation a little bit, and ideally make it work without extra steps (e.g. start the podman service automatically if needed), it should be ready to go. The main `conftest.py` seems quite complex compared to my experience from other projects, but I trust your judgement here. The important thing is that it runs and we finally have at least a few initial tests :)

@ -0,0 +17,4 @@
from alembic.config import Config as AlembicConfig
ALEMBIC_CONFIG_PATH = Path.cwd() / "alembic.ini"
TEST_SETTINGS_PATH = Path.cwd() / "conf" / "test_settings.py"
Owner

This is the first time I've seen a path constructed like this, feels very unpythonic :-D Usually I use `os.path.join()`.

Author
Owner

This code is based on the `pathlib` module, which is actually considered the successor to `os.path`. It packs some more niceties compared to `os.path` :)

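For comparison, the two styles produce the same path; the `/` operator is `pathlib`'s join, and `Path` objects bundle helpers that `os.path` spreads across several functions (the paths below are made up for illustration):

```python
import os
from pathlib import Path

# os.path style: a function call over string segments
legacy = os.path.join("/srv/app", "conf", "test_settings.py")

# pathlib style: the / operator joins Path segments
modern = Path("/srv/app") / "conf" / "test_settings.py"

print(str(modern) == legacy)           # same resulting path
print(modern.suffix, modern.parent)    # file suffix and parent directory
```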
kparal marked this conversation as resolved
@ -0,0 +45,4 @@
# Creating minimal startup configuration file
TEST_SETTINGS_PATH.parent.mkdir(parents=True, exist_ok=True)
with open(TEST_SETTINGS_PATH, "w", encoding="utf-8") as temp_settings_file:
Owner

I wonder if we can provide these values as the default values for the test config profile (in config.py) and avoid creating test_settings.py completely? I think we're doing something like that in Blockerbugs.

Author
Owner

The main reason for using a settings file is that the containerized database URL is dynamic (at least the port number is) and can change between test runs.

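A sketch of that idea: the container's host port differs between runs, so the settings file is regenerated for every session. The helper name, the `SQLALCHEMY_DATABASE_URI` key, and the URL below are all illustrative assumptions, not the PR's actual code:

```python
import tempfile
from pathlib import Path

def write_test_settings(db_url: str, path: Path) -> str:
    """Write a minimal settings file pointing the app at a dynamically
    allocated database URL; returns the rendered content."""
    content = f'SQLALCHEMY_DATABASE_URI = "{db_url}"\n'
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(content, encoding="utf-8")
    return content

with tempfile.TemporaryDirectory() as tmp:
    settings_path = Path(tmp) / "conf" / "test_settings.py"
    # In the real fixture, host/port would come from the running container.
    rendered = write_test_settings(
        "postgresql://test:test@localhost:54321/testdb", settings_path
    )
    print(rendered)
```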
kparal marked this conversation as resolved
@ -0,0 +121,4 @@
None: Cleanup in finally block removes the settings file
"""
# Create final settings file for tested app
with open(TEST_SETTINGS_PATH, "w", encoding="utf-8") as settings_file:
Owner

I think this is the same as above - instead of creating and loading the config file, we could just initialize the Config instance with the right values?

kparal marked this conversation as resolved
@ -0,0 +219,4 @@
saved_loggers_state[logger_name] = logger_obj.disabled
# Create all tables and update them to the latest schema
alembic_cmd.upgrade(alembic_cfg, "head")
Owner

I might be confused, but I thought that when you start a Flask+SQLAlchemy app against a completely empty database, it creates the current (the very latest) db schema, and therefore no migrations are necessary. Does it work differently with testcontainers?

Author
Owner

I discussed this very same problem with Claude before, here's the gist of what I learned:

While it is possible to create the db schema using the SQLAlchemy db instance method `.create_all()`, using alembic does have several advantages:

  • It creates tables through the same migration path used in production
  • It properly tracks the schema version in the alembic_version table
  • Migrations are the source of truth for the schema structure
  • It tests that the migrations themselves work correctly
Owner

Ok, that sounds reasonable. But I'm still unclear on the migration steps. So when the test container is empty and you create the initial tables, you don't create it in the final (that's "head", right?) configuration, but some older one, and migrate it? Where does it get this older version db specification from? I don't see it in the code.

Currently it seems to me that we call "upgrade", but since the database is empty, I think it just uses the current schema, creates it in the final configuration immediately, and therefore no migration is done anyway. Am I wrong?

The only way to test migration in this way would be if you had an older test container, and new migration steps were added, then, just once, the migration would be performed. But if you clean the db (make clean-test) regularly, this will never happen. So I don't think this is really testing migrations, not explicitly.

I don't want to dwell on this, it's working and if this is equivalent to create_all(), then no harm actually done. Feel free to push it this way. I'm just trying to understand it.

Owner

FWIW: Both Claude and Gemini claim this starts with the first revision and upgrades it through every migration towards HEAD. So it really seems to check the migrations, as claimed.

kparal marked this conversation as resolved
Add Makefile test and clean targets
jgroman changed title from testdays-web: create automated tests to WIP: testdays-web: create automated tests 2026-02-09 08:29:05 +00:00
jgroman force-pushed feature/76-create-automated-tests from eb719d4d76 to 3c0dbe1783 2026-02-10 14:15:41 +00:00 Compare
jgroman changed title from WIP: testdays-web: create automated tests to testdays-web: create automated tests 2026-02-10 14:16:05 +00:00
Author
Owner

Ready for review

kparal requested changes 2026-02-26 12:57:38 +00:00
Dismissed
@ -0,0 +113,4 @@
# Check if podman socket exists, enable it if it doesn't
podman_socket_path = Path(xdg_runtime_dir) / "podman" / "podman.sock"
if not podman_socket_path.exists():
Owner

Turns out my DOCKER_HOST variable is populated, even if I stop the podman socket. And not only that, /run/user/1000/podman/podman.sock also exists. And so it never reaches the code to start the service again.
Maybe we should run systemctl --user is-active podman.socket to check it instead?

To reproduce:

  1. Start podman.socket
  2. Stop podman.socket
  3. make test
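A sketch of the suggested check, asking systemd for the unit state instead of testing the socket path (the helper name is made up; it falls back to `False` where `systemctl` is unavailable):

```python
import subprocess

def podman_socket_active() -> bool:
    """Report whether the user podman.socket unit is currently active.

    The socket file can linger after the unit stops, so querying
    systemd is more reliable than checking the path on disk.
    """
    try:
        result = subprocess.run(
            ["systemctl", "--user", "is-active", "podman.socket"],
            capture_output=True, text=True, check=False,
        )
    except FileNotFoundError:
        # systemctl not present (e.g. containers, non-systemd hosts)
        return False
    return result.stdout.strip() == "active"

print(podman_socket_active())
```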
kparal marked this conversation as resolved
@ -0,0 +115,4 @@
podman_socket_path = Path(xdg_runtime_dir) / "podman" / "podman.sock"
if not podman_socket_path.exists():
LOGGER.info("Podman socket not found, enabling podman.socket service")
systemctl_cmd = ["systemctl", "--user", "enable", "--now", "podman.socket"]
Owner

This not only starts the service, but also enables it permanently, even after reboot. Is it intentional? I guess there might be some security or other reasons why it's not enabled by default, but I haven't really looked into that much. Either way, why not just *start* the service, but avoid enabling it permanently? I.e. just running `systemctl --user start podman.socket`. That should be good enough for us, no?

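The session-only variant suggested above would look like this in the fixture code — a sketch, with the runner injectable for testability; the constant and function names are assumptions, not from the PR:

```python
import subprocess

# Session-only alternative to "systemctl --user enable --now": "start"
# brings the socket unit up for the current session only, leaving the
# permanent (boot-time) configuration untouched.
SYSTEMCTL_CMD = ["systemctl", "--user", "start", "podman.socket"]

def start_podman_socket(run=subprocess.run):
    # run() is injectable so the call can be tested without systemd
    return run(SYSTEMCTL_CMD, check=True)
```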
kparal marked this conversation as resolved
jgroman changed title from testdays-web: create automated tests to WIP: testdays-web: create automated tests 2026-03-02 09:36:33 +00:00
jgroman changed title from WIP: testdays-web: create automated tests to testdays-web: create automated tests 2026-03-02 10:27:52 +00:00
Author
Owner

Ready for review
kparal approved these changes 2026-03-06 12:53:55 +00:00
Dismissed
kparal left a comment
Owner

This works for me now, thanks for working on it!

I have one further recommendation - either allow `make test` to pass arguments to `pytest`, or document in `DEVELOPER.md` how to run it manually with optional arguments (`RUNMODE=test pytest <arguments>`).

Pytest arguments are very useful when debugging e.g. only a particular file/method, or when using `-x`, `--capture` and similar options.

It can be a part of this PR, or some future commit, it doesn't really matter, but I think at least adding the one or two sentences to the documentation will be helpful for our future selves (and others) 😉

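For the first option, one common pattern is a pass-through variable in the Makefile — a sketch only; the `PYTEST_ARGS` variable name is an assumption, not taken from this PR:

```make
# Forward extra arguments to pytest, e.g.:
#   make test PYTEST_ARGS="-x tests/test_example.py"
test:
	RUNMODE=test pytest $(PYTEST_ARGS)
```

With this shape, `make test` keeps working unchanged, while `make test PYTEST_ARGS="..."` supports the debugging workflows mentioned above.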
jgroman changed title from testdays-web: create automated tests to WIP: testdays-web: create automated tests 2026-03-06 13:26:55 +00:00
Author
Owner

This enhancement sounds very useful indeed. I'll extend the PR code with this modification.

jgroman changed title from WIP: testdays-web: create automated tests to testdays-web: create automated tests 2026-03-06 14:20:12 +00:00
kparal approved these changes 2026-03-06 14:30:20 +00:00
kparal left a comment
Owner

Awesome, works great. Thanks a lot.

jgroman force-pushed feature/76-create-automated-tests from 09775730af to ad5a93acbf 2026-03-06 14:35:20 +00:00 Compare
jgroman force-pushed feature/76-create-automated-tests from ad5a93acbf to db3c8345d6 2026-03-06 14:36:25 +00:00 Compare
jgroman merged commit db3c8345d6 into develop 2026-03-06 14:41:05 +00:00
jgroman deleted branch feature/76-create-automated-tests 2026-03-06 14:41:05 +00:00
Blocks: #76 Create automated tests (quality/testdays-web)
Reference: quality/testdays-web!89