Releasing Raku modules with fez


Last week I got a message on Matrix, asking me to update one of my Raku modules, Config::Parser::TOML. One of its dependencies had been updated, and the old one is no longer available through the module installer zef. It’s not that big a change, and there are tests available, so it’s a reasonably small fix in itself.

Recently I’ve set up Argo Workflows for my CI/CD desires, and I found this a good and simple Raku project to try and incorporate into a workflow. Since I had some additional quality checks ready to use in my workflow, this has resulted in REUSE compliance for this Raku module, in addition to the regular prove tests already available in the project. The de facto default module authoring tool, fez, also brings a few checks of its own, which have been incorporated into the workflow as well.
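For reference, these quality checks boil down to commands along the following lines. The exact invocations in my Argo templates differ slightly, but this is the gist of it:

```sh
# Verify REUSE compliance (licensing and copyright info on every file).
reuse lint

# Run the module's test suite through prove, using Rakudo as executor.
prove -e 'raku -Ilib' -v t/
```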

While all that is good, there were some annoyances I encountered while configuring this. Notably, I’ve found fez to be a chore to work with when it comes to non-interactive use. All CI/CD jobs run in their own Kubernetes pods, and should not require any interaction from me during these runs. I am writing this blog post mainly to write down the annoyances I encountered, hoping that fez can be improved in the future.

Let’s start with the first issue I encountered while setting up the workflow: zef install fez fails by default. zef gives the advice to --exclude one of the dependencies, and going by the issues reported on their GitHub repository, this seems to be the accepted workaround. However, I’d argue that this workaround should not be needed to begin with, especially seeing as fez works fine without it, and I have absolutely no clue what this z is or how I could supply it. Either drop this dependency, or document its use and its upstream so people can package it.
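The workaround thus becomes something along these lines, with z being the dependency zef suggested excluding on my run:

```sh
# Work around the unresolvable dependency by excluding it explicitly.
zef install fez --exclude="z"
```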

The second issue I encountered was with the login functionality of fez. There seems to be no way to handle this non-interactively. My way around this has become to use expect scripts, but this is obviously not very pretty, and it will break whenever the interactive interface of fez changes. A good means of non-interactive authentication would be great to have. I’ve considered just mounting fez’s config and cache into the containers, but the documentation warns that tokens aren’t permanent to begin with.
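For the curious, the expect workaround is a sketch along these lines. The FEZ_USER and FEZ_PASS environment variables are names of my own choosing, and the prompt patterns are assumptions about the current interactive interface, so both will need adjusting whenever fez changes:

```sh
# Drive "fez login" non-interactively with expect. The prompt patterns
# are guesses based on the current fez version and may break on updates.
# FEZ_USER and FEZ_PASS are assumed to be exported in the environment.
expect <<'EOF'
set timeout 60
spawn fez login
expect -re {(?i)username} { send "$env(FEZ_USER)\r" }
expect -re {(?i)password} { send "$env(FEZ_PASS)\r" }
expect eof
EOF
```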

Next up there’s the actual upload command. I’m running it twice in my workflow, once with --dry-run and once with --force. The first run is a preliminary quality check, to see if there are any obvious issues that ought to be fixed beforehand. I noticed on a subsequent run (the one with --force) that the dry run isn’t all that dry: it leaves behind an sdist directory, which in turn gets included in the next step. There’s a flag to create this sdist directory, but no flag to do the inverse. My solution is to end this step with rm -fr -- sdist to clean it up again.
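The dry-run step thus ends up looking roughly like this:

```sh
# Preliminary check of the distribution before the real upload.
fez upload --dry-run

# The dry run leaves an sdist directory behind, which would otherwise
# get picked up by the actual upload later on, so remove it again.
rm -fr -- sdist
```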

And lastly, when all quality assurance checks have passed, the fez upload --force command is run on the working directory. I’d rather not force anything here, but the alternative is that another interactive question pops up and the job hangs forever. I don’t know all the possible prompts fez can generate, and for this one I didn’t even bother to look them up. Rather than a --force that practically says “yes” to everything, I’d prefer an option to say “no” to everything, failing the pipeline immediately.
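With that, the actual release step boils down to a single command:

```sh
# Upload the distribution to the ecosystem. --force effectively answers
# "yes" to any interactive prompts, so the job cannot hang on input.
fez upload --force
```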

Another pet peeve of mine is that fez seemingly doesn’t use exit codes. No matter what happens, even for something quite important such as logging in with incorrect credentials, it always returns 0 as its exit code. This should obviously be fixed sooner rather than later; it is quite simple to do, and checking exit codes is how many systems deduce that something went wrong.
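Until that is fixed, the only way I see to make a CI step fail on a fez error is to inspect its output instead of its exit code. A sketch of such a guard is shown below; the pattern it greps for is purely an assumption about fez’s current error messages, and may need tweaking for other versions:

```sh
# fez exits with 0 even when something went wrong, so capture its output
# and fail the step ourselves when it looks like an error occurred. The
# "error" pattern is a guess and may need updating for other fez versions.
output=$(fez upload --force 2>&1)
printf '%s\n' "$output"
if printf '%s\n' "$output" | grep -qi 'error'; then
    exit 1
fi
```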

Uploads of module updates are currently working, which is good, but I feel like a lot of the workaround code I had to write should not be necessary. If fez can fix these issues, it will be much more of a breeze to use, which in turn will hopefully encourage more automated testing and distribution of Raku modules. That would be a great boon for the module ecosystem and the community as a whole.