Version: 0.0.2


Calibre Web Automated

I had known about calibre-web for some time and was aware of this tool, but when I first found them they were too early to be usable for me. I recently came across it again in a reddit thread and just, wow. This handles so much more than komga and readarr: it automatically fetches good metadata for downloaded books, has an automated ingestion flow, and can even automatically send books to kindle devices, which was mind-blowing. It took me only an hour or two to work through the settings: configuring reverse proxy auth, setting up the metadata.db in its own folder away from the books (this was important to avoid sqlite lockups with mergerfs), and understanding the ingestion flow. It essentially doesn't allow me to hardlink, but with books that's nothing to me; the duplicates barely take up any space. The ingestion flow is essentially a watched folder: the moment any book appears, cwa reads it and moves it into the library. It's essentially a landing zone for new books. A major plus was also finding shelfmark, which is amazing to use alongside it.
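As a rough sketch, the layout described above looks something like this in compose form (the host paths here are placeholders, not my actual mount points):

```yaml
services:
  calibre-web-automated:
    image: crocodilestick/calibre-web-automated:latest
    ports:
      - "8083:8083"
    volumes:
      # landing zone: anything dropped here gets ingested, then moved out
      - /mnt/pool/books/ingest:/cwa-book-ingest
      # the final organized library
      - /mnt/pool/books/library:/calibre-library
      # config (and metadata.db) on local disk, away from mergerfs,
      # to avoid the sqlite lockups mentioned above
      - /opt/cwa/config:/config
```

Keeping `/config` off the mergerfs pool is the key detail; the ingest and library folders can live wherever the books do.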

Docusaurus

I never really used any SSG tools until I went looking for a better way to write documentation like this and make it easy to navigate. I initially tried messing around with mkdocs, but found its support limiting unless I committed to mkdocs-material, and even then I would have to brace for a migration since mkdocs is going through a rewrite. So I just looked up what people recommend for something like this and, lo and behold, found docusaurus. That, coupled with Cloudflare Pages (which is excitingly free), was a quick and easy way to get my documentation into a really nice place. Liking it so far!

Homarr

warning

Deprecated

Homarr was the first dashboard solution I used for my services. It was really great to have a central place with links to every service I have deployed. It also allowed custom views per user, so I could make a bookmark page for users who only needed to see some public services, while my admin view could see everything. Beyond links, homarr provides widgets that give insights into some of the services, e.g. the list of torrents being seeded or downloading, pending requests from jellyseerr, or the upload and download speed of my torrent client. With SSO support it was very easy to configure. One small downside is the need to adjust apps and widgets for every screen size whenever I made a change. I'm looking into potentially coming back to this, as its v1 release has promising features.

Homepage

After using homarr for a long time I realized its widgets were overdone and exposed more information than I was comfortable displaying. Looking through dashboard services I eventually stumbled onto homepage, which had the perfect widget configuration and the right amount of information on display. With the configuration sitting nicely in a single yaml file, it was really easy to make adjustments to the layout of the page. Homepage also has some really cool features for gathering kubernetes metrics, so with everything considered it was the best service to use.
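To give a sense of how compact that single yaml file is, a hypothetical `services.yaml` entry looks roughly like this (the group, URLs, and secret are placeholders, not my real config):

```yaml
- Media:
    - Jellyseerr:
        icon: jellyseerr.png
        href: https://requests.example.com
        description: Media requests
        widget:
          type: jellyseerr
          url: http://jellyseerr:5055
          key: "{{HOMEPAGE_VAR_JELLYSEERR_KEY}}"
```

Each group is a top-level list entry, each service a nested map, and the widget block is what pulls live data; secrets can be injected via `HOMEPAGE_VAR_*` environment variables instead of living in the file.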

iframe

warning

Deprecated 2/20/26

No link on this one since I fully built it out myself (well, vibe coded it). It was a really nice way to add quick views and UX to homepage without causing a bunch of issues. Right now I'm using it to provide controls for turning vms in openmediavault on and off when Shelly wants to play games, and to check permissions so that homepage can hide UI elements from non-admins. The UI looks pretty good, and with just fastapi and htmx it turned out really useful and well built.
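The permission piece boils down to reading a group header set by the reverse proxy. A minimal sketch of that check, with the header name and group being assumptions based on a typical authentik forward-auth setup rather than the real code:

```python
# Sketch of gating UI elements on a reverse-proxy auth header.
# Header name and admin group are assumptions, not the actual values.
ADMIN_GROUP = "admins"
GROUPS_HEADER = "X-Authentik-Groups"  # set by the auth proxy, pipe-separated

def is_admin(headers: dict) -> bool:
    """Return True if the forwarded groups header contains the admin group."""
    groups = headers.get(GROUPS_HEADER, "")
    return ADMIN_GROUP in (g.strip() for g in groups.split("|"))
```

In the real app this sort of check decides whether the htmx fragments for admin controls are rendered at all, so non-admins never receive that markup.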

Komga

warning

Deprecated 2/1/26

A recent addition to my services, komga is an e-book and comic reader service. It beautifully organizes downloaded books into an interface for me to use. I recently started trying to read more often and this was a really great service for that. Unfortunately it cannot automatically download books I'm looking for; that would be a perfect job for something like readarr, but due to metadata issues that service was basically unusable. Right now I just use prowlarr to search for books to download, and when a torrent is sent to qbittorrent it is automatically tagged as a book. Komga also lacks event-driven library scanning and relies on periodic scans to keep the library up to date. To get a better experience I wrote a small script that, together with devodev/inotify, triggers library rescans whenever a change is detected in the e-books folder.
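The actual trigger used devodev/inotify plus a small shell script; as a rough stdlib approximation of the same idea (polling instead of inotify, with the scan call left as a callback, since the real one would hit Komga's library-scan API):

```python
import os
import time

def snapshot(root):
    """Map every file under root to its mtime."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                state[path] = os.path.getmtime(path)
            except OSError:
                pass  # file vanished mid-walk
    return state

def changed(old, new):
    """True if any file was added, removed, or modified."""
    return old != new

def watch(root, trigger, interval=30.0):
    """Poll root forever; call trigger() whenever the tree changes.
    In the real setup, trigger would POST to Komga's library scan endpoint."""
    state = snapshot(root)
    while True:
        time.sleep(interval)
        current = snapshot(root)
        if changed(state, current):
            trigger()
            state = current
```

inotify is strictly better than polling here (instant reaction, no periodic disk walks), which is why the real setup used it; this just shows the shape of the loop.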

I was not expecting to throw this service away, but I found calibre-web-automated, which does everything komga offers and more. Now I have a completely automated flow without needing to set up inotify services and everything else.

Lazy Librarian

warning

Deprecated 12/25

With readarr no longer working at full capacity due to needing a new metadata source, and needing a better way to search for books than prowlarr alone, I set out to find a provider that could work. I had used lazy librarian before, but I was never able to configure it correctly for easy user accounts or to understand its settings well enough for it to feel like an arr app. Using the proxy header settings I was able to get automatic logins working (I needed to change the admin account name to match the username in authentik). It took a bit of tinkering, but I found the settings to reduce the delay in processing downloads, so that it would frequently check whether downloads were complete and would be faster to mark a download as failed. A major setting I needed to tweak was the minimum percentage match for search and download, which was set at 80-90% and was the reason I was never able to easily find torrents for books. After reducing this number, lazy librarian was able to automatically download books that matched what I needed. It also imports and organizes the e-book files.

LazyLibrarian was a good alternative to Readarr for some time, until I tried using it more: its metadata isn't so great, and the amount of management needed to keep it working well wasn't worth the effort, especially with Readarr's metadata source getting better.

Nextcloud

Good self-hosted solutions for file storage and viewing are hard to come by. Nextcloud provides all of those features in addition to being an office suite, so getting it configured was a no-brainer. It allowed synchronizing data between my nas and devices, and viewing all the data I have stored on it. It took me a long time to get a performant instance running, as it requires a decent number of tweaks to its configuration, such as adding a postgresql database, redis caching, and various settings. It can also integrate with authelia or authentik by authenticating against an ldap service. Owncloud was an alternative, but since it doesn't support local storage, and other services don't support it either, it was a no-go.
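The tweaks in question mostly live in `config.php`; the core of what I mean looks roughly like this (hostnames are placeholders for the database and redis containers):

```php
<?php
$CONFIG = [
  // use postgresql instead of the default sqlite
  'dbtype' => 'pgsql',
  'dbhost' => 'postgres',
  'dbname' => 'nextcloud',

  // local in-process cache, plus redis for file locking
  'memcache.local'   => '\OC\Memcache\APCu',
  'memcache.locking' => '\OC\Memcache\Redis',
  'redis' => [
    'host' => 'redis',
    'port' => 6379,
  ],
];
```

The redis-backed `memcache.locking` setting in particular is what stops the transactional file locking table from becoming a bottleneck on larger syncs.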

Readarr

warning

Deprecated 2/1/26

This was the de facto service for managing an e-book library, but it has over time lost its ability to do so. readarr was the equivalent of sonarr for tv shows, but because readarr depended on the goodreads api for all of its metadata needs, it is no longer a viable solution. Goodreads shut down its api, so readarr desperately needs a new source; without one, many books can't be searched for or even organized into the library, since readarr can't identify them. I tested it out for a bit just to get a sense of how much I could accomplish with it: it was able to identify some books, but for certain authors it was pretty hit or miss.

I recently gave this a whirl again, and the new metadata source from rreading-glasses breathed new life into the readarr project and has made it significantly easier to download new books.

Over time I realized that using readarr alone still made it hard to share with others and let them download books without asking me. I did end up finding shelfmark, which completely blows readarr out of the park in terms of download support (IT DOES DIRECT DOWNLOADS AND TORRENTS!).

recreator

Another container that is not linked, since I built it on my own. This is a simple container that fixes issues with containers depending on networks that were not part of their own compose file. The issue came from adding services like speedtest-tracker, which lived in its own compose file separate from torrent: whenever the qbittorrent instance generated a new network, speedtest-tracker would completely stop working. What sucked was that the only way to fix it was to tear down the container and rebuild it so that it was aware of the new network. recreator does exactly that: with a simple docker label, it automatically tears down and recreates containers that run into network issues, so I no longer need to go in and fix containers like this manually.
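Stripped of the docker API plumbing, the decision recreator makes is just "does this labeled container still reference a network that no longer exists". A sketch of that core check, where the label name is my own convention for illustration rather than anything docker defines:

```python
# recreator's core decision, minus the docker API plumbing.
# RECREATE_LABEL is an illustrative label convention, not a docker standard.
RECREATE_LABEL = "recreator.enable"

def needs_recreate(container, live_network_ids):
    """A labeled container needs recreating if any network it was attached
    to at creation time no longer exists (e.g. compose regenerated it).

    container: dict with "labels" (dict) and "network_ids" (list of str)
    live_network_ids: set of network ids that currently exist
    """
    if container.get("labels", {}).get(RECREATE_LABEL) != "true":
        return False
    attached = set(container.get("network_ids", []))
    # any attached network missing from the live set means a stale reference
    return not attached <= live_network_ids
```

The real container then runs the equivalent of `docker compose up -d --force-recreate` for anything this check flags.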

Shelfmark

This was an amazing tool I found when revisiting calibre-web-automated; it's essentially jellyseerr for books. While it doesn't have a discovery feature, which would be amazing to have, it lets you freely browse for books and attempts to download them either through a direct source or through the indexers it can access via prowlarr. The direct download feature is insanely good, especially since anna's archive is purely direct downloads and has such a vast library. It also works so well with cwa in that it automates the whole ingestion process for both torrents and direct downloads, so there's no need for me to set up any helper scripts. Just like with cwa, I set up reverse proxy auth and everything was good to go.

OliveTin

As I was figuring out how to set up a container to trigger scripts from webhooks, I stumbled upon this, which also turned out to be a replacement for my custom iframe setup. It basically gives you a dashboard to trigger actions, complete with user controls and everything. It did take some time to figure out how to set up actions, use the api, and configure crons, but it's honestly perfect for my use case. It covers webhooks and exposes actions to anyone who needs to run something on the server. Using an ssh setup I can connect to the host and execute anything I need. This was really an amazing find, and I can now trigger crons on my own whenever needed.
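For reference, an OliveTin action is only a few lines of YAML; a hypothetical example in the shape of mine (the title, host, and script path are placeholders):

```yaml
actions:
  - title: Start gaming VM
    icon: ping
    shell: ssh vmhost 'sudo /usr/local/bin/start-gaming-vm.sh'
    timeout: 120
```

Each action gets a button on the dashboard and is also reachable through the API, which is what makes it work for both the human-facing controls and the webhook-triggered scripts.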

WikiJS

warning

Deprecated 6/1/25

I never really thought I needed a service to document work I do in personal projects. But over time, as complexity grew and newer services were added to the pool, I realized I could easily forget everything if I stepped away from it all for a couple of months. In looking for a good self-hosted documentation tool, one really important feature was being able to view the docs without the service actually being deployed. For a while I was thinking of just using github to store everything, but then I found wikijs, which comes with the ability to synchronize all of my documentation into a github repo. With a pretty website that can organize my documentation, provide search, and back everything up to github, it seemed perfect to configure. It was a breeze to pick up how to create and modify pages, and it also allowed setting up SSO so I could share this documentation with people using my services. While this was a great application to run, I realized it is probably perfectly fine for me to modify the markdown files in the repository itself instead of running wikijs. I've thus decided to deprecate the application.

Windows

This docker image is a wrapper around qemu, which normally provides linux vms, but this one does windows instead. With two gpus, and no longer running whisper or a gpu llm model, I decided to look into a simple way of setting up a windows gaming vm to allow easy gaming on the server. First there was the initial challenge of setting up gpu passthrough for the nvidia 3060, which surprisingly took less than an hour. Because I was not planning to use this gpu on the host machine at all, I had to:

warning

Below steps are deprecated

  1. Create the file /etc/modprobe.d/vfio.conf (modprobe.d only reads files ending in .conf) and add these lines to it:

    options vfio-pci ids=10de:2487,10de:228b

    # If using open source drivers
    softdep nouveau pre: vfio-pci
    softdep nouveau\* pre: vfio-pci

    # If using nvidia drivers
    softdep nvidia pre: vfio-pci
    • This loads the vfio-pci kernel module before any gpu driver can claim the card, so the devices stay free to be passed into any kvms I have set up.

After a reboot the gpu was ready to be passed through into the docker container running qemu: by adding the gpu's pcie addresses from its iommu group to the ARGUMENTS environment variable, and passing the /dev/vfio device into the service, the gpu could be handed to the windows vm. For sunshine streaming I additionally had to open a bunch of extra ports and pass my wireless keyboard into the vm, by exposing /dev/bus/usb and adding another qemu argument to ARGUMENTS specifying the vendor and product id. This is still in progress, as moonlight is oddly still having issues establishing a stream to the vm.
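Pulling those pieces together, the relevant part of the compose file looked roughly like this (the PCI addresses and the keyboard's vendor/product ids are illustrative, not my actual hardware):

```yaml
services:
  windows:
    image: dockurr/windows
    environment:
      # hand the gpu (and its audio function) plus a usb keyboard to qemu
      ARGUMENTS: >-
        -device vfio-pci,host=01:00.0
        -device vfio-pci,host=01:00.1
        -device usb-host,vendorid=0x046d,productid=0xc52b
    devices:
      - /dev/vfio:/dev/vfio
      - /dev/bus/usb:/dev/bus/usb
```

Both functions of the gpu (video and audio) live in the same iommu group, so both get passed through together.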

An update here: I wanted to get better use out of the gpu when the windows kvm wasn't running, so I ended up researching a bit more and came across dynamic vfio binding. This lets you use the gpu as normal on the host and, with a couple of commands when the gpu is needed for the vm, unload the gpu drivers, bind it to vfio, and have it available for passthrough. This really only works because I have a second gpu, but it was still a really great way to get more use out of my system. Using olivetin's action api I can trigger these scripts on the host, binding the vfio drivers to the gpu before the vm boots and unbinding them the moment it shuts down. I was able to create an elegant solution with s6-overlay that perfectly handles the startup and shutdown tasks for these processes.
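The dynamic binding itself comes down to a pair of sysfs operations. A sketch of the bind side, where the PCI addresses are illustrative and the real scripts also stop anything still holding the gpu first:

```sh
#!/bin/sh
# Illustrative PCI addresses; find yours with: lspci -nn | grep -i nvidia
GPU=0000:01:00.0
AUDIO=0000:01:00.1

# unload the nvidia stack so nothing holds the card
modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia

for dev in "$GPU" "$AUDIO"; do
    # detach from whatever driver currently owns the device
    if [ -e "/sys/bus/pci/devices/$dev/driver" ]; then
        echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
    fi
    # make vfio-pci claim the device on the next probe
    echo vfio-pci > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers_probe
done
```

The unbind script is the mirror image: unbind from vfio-pci, clear driver_override, reprobe, and reload the nvidia modules.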