CLI for interacting with Fedora wiki release validation pages (https://fedoraproject.org/wiki/Wikitcms)

relval

relval is a CLI tool, built on python-wikitcms, which helps create the wiki pages used to track the results of Fedora release validation events, generates statistics (such as the 'Heroes of Fedora' testing statistics and test coverage statistics), and can also report test results. If you're interested in relval, you may also be interested in testdays, which is to Test Day pages as relval is to release validation pages.

Put simply, you can run relval compose --release 25 --milestone Final --compose 1.1 and all the wiki pages needed for the Fedora 25 Final 1.1 release validation test event will be created (if they don't already exist). The user-stats and testcase-stats sub-commands handle statistics generation, and report-results can report test results.

Installation and use

relval is packaged in the official Fedora and EPEL 7+ repositories: to install on Fedora, run dnf install relval; on RHEL / CentOS with EPEL enabled, run yum install relval. You may need to enable the updates-testing repository to get the latest version. To install on other distributions, you can run python setup.py install.
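For reference, the package-manager commands described above look like this (the updates-testing step is only needed if you want the very latest build):

```shell
# Fedora
sudo dnf install relval

# optionally pull the newest build from updates-testing
sudo dnf --enablerepo=updates-testing install relval

# RHEL / CentOS with EPEL enabled
sudo yum install relval
```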

You can visit the relval project page on Fedora Forge, and clone with git clone https://forge.fedoraproject.org/quality/relval.git. Tarballs are also available.

You can use the relval CLI from the tarball without installing it, as ./run-relval.py from the root of the tarball. You will need all its dependencies, which are listed in setup.py.
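A typical from-source workflow, assuming the dependencies listed in setup.py are already installed, might look like this:

```shell
git clone https://forge.fedoraproject.org/quality/relval.git
cd relval
./run-relval.py --help
```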

Bugs, pull requests etc.

You can file issues and pull requests on Fedora Forge.

ANY USE OF AI/LLM IN THE PRODUCTION OF A PULL REQUEST MUST BE CLEARLY DISCLOSED. This is for legal reasons, as the copyrightability of LLM-generated code is (as of July 2025) disputed and unclear. This is also for technical reasons, as reviewers may need to look out for different problems when reviewing LLM-generated code, compared to human-generated code.

Please include an Assisted-by line in the commit message, specifying the model, tool and/or service used in creating the pull request, and a more detailed explanation of how LLM technologies were used in the pull request description. This can be copied/pasted from PR to PR if the workflow remains the same.

Here is a sample commit message:

Make the frobnosticator reticulate splines better

By rejigging the frobnosticator, we can reticulate splines twice as fast!

Signed-off-by: Bob Roberts <bob@example.com>
Assisted-by: Google Gemini Pro 2.5

Pull requests must be signed off (use git commit's -s argument). By signing off your pull request you are agreeing to the Developer's Certificate of Origin:

Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
    have the right to submit it under the open source license
    indicated in the file; or

(b) The contribution is based upon previous work that, to the best
    of my knowledge, is covered under an appropriate open source
    license and I have the right under that license to submit that
    work with modifications, whether created in whole or in part
    by me, under the same open source license (unless I am
    permitted to submit under a different license), as indicated
    in the file; or

(c) The contribution was provided directly to me by some other
    person who certified (a), (b) or (c) and I have not modified
    it.

(d) I understand and agree that this project and the contribution
    are public and that a record of the contribution (including all
    personal information I submit with it, including my sign-off) is
    maintained indefinitely and may be redistributed consistent with
    this project or the open source license(s) involved.

Note that you must be able to agree to ALL PROVISIONS of the DCO for LLM-assisted contributions. The exact implications of this are left intentionally unclear as we are not lawyers. We strongly recommend that all pull requests include at least some element of human contribution and that you carefully review LLM-produced or assisted code for copyright issues.

Usage

The validation event SOP provides the correct invocation of relval to use when you simply wish to create the pages for a new compose (the most common use case).

User authentication (for commands requiring login)

The following applies to all commands that require login - anything that writes to the wiki: currently compose, report-results, and size-check.

Since early 2018, the Fedora wikis use OpenID Connect-based authentication. When you first use any of the commands that require login, a browser window will open and walk you through the authentication process; this will create a login token that is valid for a while, and subsequent use of these commands will work transparently. After a while the token will expire, and the next time you try to use one of these commands, you will go through the authentication process again.

The old --username and --password arguments, and the ~/.fedora/credentials file which used to be available for you to store your username and password for 'non-interactive' login, no longer do anything. It would be a good idea to remove any remaining credentials files as they are now only a potential security risk. For long-term non-interactive usage of the wiki via relval or any other system, you must request a permanent auth token from the wiki administrators.

Common options

All sub-commands honor the option --test, to operate on the staging wiki instead of the production wiki, which can be useful for testing. Please use this option if you are experimenting with the result page creation or result reporting sub-commands, especially if you also pass --force.

All options mentioned here have short names (e.g. -r for --release), but the long names are given here for clarity. Usually the short name is the first letter of the long name. The help pages (relval <sub-command> -h) list all options with both their long and short names.

compose

For validation event page creation, use the compose sub-command: relval compose. You must pass either --milestone and --compose, or --cid, to identify the compose for which pages will be created. When using --milestone and --compose you may also pass --release to specify the release to operate on; otherwise, relval will attempt to discover the 'next' release and use that.

You may pass --testtype to specify a particular 'test type' (e.g. Base or Desktop); if you pass a test type, only the page for that type (plus the summary page and category pages) will be written, while if you do not, the pages for all test types will be written. You may pass --no-current to specify that the Test_Results:Current redirect pages should not be updated to point to the newly-created pages (by default, they will be). You may pass --force to force the creation of pages that already exist: this applies to the results pages, category page contents, and summary page, but not to the Current redirects, which will always be written if page creation succeeds (unless --no-current is passed). You may pass --download-only to specify that only the Download template (which provides the table included in the instructions section of all the results pages) should be written; this is handy if you need to create or update the Download page for an existing event.
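To illustrate the options described above, here are a few hypothetical invocations (the release, milestone, compose, and compose ID values are examples only; consider adding --test while experimenting):

```shell
# create all pages for a specific compose
relval compose --release 25 --milestone Final --compose 1.1

# identify the compose by compose ID instead (example ID)
relval compose --cid Fedora-25-20161115.0

# recreate only the Base page, against the staging wiki
relval compose --release 25 --milestone Final --compose 1.1 \
    --testtype Base --test --force
```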

user-stats

For user statistics generation, use relval user-stats. It has no required options.

You may pass --release to specify the release to operate on; otherwise, relval will attempt to discover the 'next' release, and use that. You may optionally specify a milestone to operate against, with --milestone (Alpha|Beta|Final) (it does not accept Branched or Rawhide, but if you do not pass --milestone at all, Branched and Rawhide result pages will be included). You may also pass the --filter option as many times as you like. If passed, only pages whose name matches any of the --filter parameters will be included. For instance, relval user-stats --release 21 --milestone Beta --filter TC3 --filter Desktop will operate against all Fedora 21 Beta pages with "TC3" or "Desktop" in their names. You may pass --bot to include 'bot' results (those from automated test systems) in the statistics; by default they are excluded.
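The example above, plus a variant, as full command lines (the release and filter values are illustrative):

```shell
# all Fedora 21 Beta pages with TC3 or Desktop in their names
relval user-stats --release 21 --milestone Beta --filter TC3 --filter Desktop

# all result pages for release 21 (including Branched and Rawhide),
# with 'bot' results included
relval user-stats --release 21 --bot
```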

The result is a simple HTML page, printed directly to the console, which you can save or paste into (for example) a blog post; it contains statistics on the users who contributed results to the chosen set of pages.

testcase-stats

For test coverage statistics generation, use relval testcase-stats. The parameters are the same as those for user-stats. The output is an entire directory of HTML pages (in /tmp by default) with a top-level index.html that links to summary pages for each "test type", and detailed pages for each "unique test" linked from the summary pages. You can pass --out to specify a different output directory; note that it will be deleted first if it already exists. You can then place the entire directory on your web server in a sensible location. Note that the top-level directory will have 0700 permissions by default, so you may have to change this before the content is visible on the server.
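A hypothetical end-to-end run might look like this (the output path is an example only):

```shell
# write coverage statistics for Fedora 21 Beta to a chosen directory
# (the directory is deleted first if it already exists)
relval testcase-stats --release 21 --milestone Beta --out /srv/www/testcase-stats

# loosen the default 0700 permissions before publishing
chmod 0755 /srv/www/testcase-stats
```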

report-results

report-results lets you...report results. It edits the result pages in the wiki for you. Why yes, a hacky TUI that pretends MediaWiki is a structured data store is a deeply ridiculous thing, thank you for asking.

You may pass --release, --milestone, --compose and --testtype if you like. If you don't fully specify a compose version, it will first attempt to detect the 'current' compose and offer to let you report results against that; if you want to report against a different compose, it will prompt you for the details.

Once you've chosen a compose to report against one way or another, it will then ask you which page section to report a result in, and then which test to report a result for, then what type of result to submit, then whether you want to specify associated bug IDs and/or a comment. And then it will submit the result. Once you're done, you can submit another result for the same section, page, or test type (avoiding the need to re-input those choices).
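Two typical starting points (the compose values are illustrative; the tool is interactive from there):

```shell
# fully specify the compose up front
relval report-results --release 25 --milestone Final --compose 1.1 --testtype Base

# or let relval detect the 'current' compose and prompt for the rest
relval report-results
```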

Please do keep an eye on the actual result wiki pages and make sure the tool edited them correctly.

size-check

size-check checks the size of the image files for a given compose, and reports the results to the wiki.

You may pass --release, --milestone, and --compose to specify the compose to operate on. If you pass none of them, relval will check the 'current' compose. If you pass only some, wikitcms will try to guess which compose you meant, and the command will fail if it cannot.

You may also pass --bugzilla, which will report bugs to Bugzilla for oversize images. If --test is also passed, the bugs will be reported to partner-bugzilla.redhat.com (which is effectively a sandbox instance); otherwise they will be reported to bugzilla.redhat.com, so please do not do this unless you're really sure it's necessary. This uses python-bugzilla: please see its documentation for information on authentication. If you do not provide some form of authentication information in a python-bugzilla configuration file and no valid tokens are stored locally from a recent successful login, you will be prompted to enter a username and password interactively.
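Putting the above together, two illustrative invocations (compose values are examples only):

```shell
# check the current compose and report the results to the wiki
relval size-check

# check a specific compose and file bugs for oversize images,
# using the staging wiki and the sandbox Bugzilla instance
relval size-check --release 25 --milestone Final --compose 1.1 --bugzilla --test
```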

Note that there is now automation in place to run size-check automatically when validation events are created, so it is unusual for it to be necessary to run it manually any more.

Credits

The user-stats and testcase-stats sub-commands are re-implementations of work originally done by Kamil Paral and Josef Skladanka, and incorporate sections of the original implementations, which can be found in the history of the qa-stats git repository.

License

All copyrightable content in relval is released under the GPL, version 3 or later. With reference to the US Copyright Office's 2025 report on the copyrightability of AI-generated content, to the extent that this project contains any such content, it is asserted that the project as a whole constitutes "a larger human-generated work" and is copyrightable. Copyright is explicitly asserted to the maximum possible extent in all human-generated or human-assisted elements of this project.